/ ADDING AI AND MACHINE LEARNING TO IOT

By Jennifer Skinner-Gray, Senior Director, Supplier Technology Enablement, Avnet

As technology matures, it becomes part of the landscape. We can’t say that about AI yet, but it is getting closer. Avnet is helping more manufacturers integrate more types of AI and machine learning into more products.

When new technology enters the market, people may question its purpose. The internet of things (IoT) is a good example. Today, however, it is far more apparent how the technology reshapes our lives: smart homes, streaming services and contactless payment are all enabled by IoT.

The same question could be levelled at AI. We see, read and hear about massive leaps in technology almost daily, but is it really having an impact? Although individual applications of AI may not be obvious, its overall influence is being felt everywhere. Change at this scale impacts every level of every industry. We are feeling that strongly in the electronics sector.

There are now millions of pretrained AI models available to developers. Many are open source, and some are easily portable to embedded platforms. Semiconductor manufacturers acknowledge this as more than a trend; it is the new baseline. Avnet is working closely with its supplier partners to deliver out-of-the-box AI and machine learning experiences that can accelerate the development of new AI-enabled solutions.

Building on an IoT foundation

According to analysts, IoT is now at the early majority stage of technology adoption and will enter the late majority phase as it becomes more prevalent. Just as IoT enables today’s connected systems, AI will be a building block for future connected systems. Intelligence will become an integral part of products that use IoT technology.

We can think of AI, particularly at the network’s edge, as being in transition from the innovator to early adopter phase. Adding intelligence to connected devices will drive AI into the early majority phase and help push IoT into the late majority category.

The evolution of AI

Technology that is truly disruptive is adaptable, employed in ways that suit the application. This is true of IoT and will be true for AI. There are many forms of AI today, from large language models that underpin generative AI, to machine learning models that take simple sensor data and allow causes to be inferred.

AI models able to operate with minimal guidance are referred to as agents. Agentic AI uses agents to achieve a specific task. Agents will be central to tomorrow's applications, such as autonomous mobile robots (AMRs).

Most AI models use one type of data as an input. At an enterprise level, developers are working on models that can handle multiple modalities, such as text, images and voice together, to achieve a task. This will remove some of today’s barriers, such as having to refer to an object by name when giving a command instead of simply pointing at it.

Multimodal AI at the edge is a similar development. Rather than building ever-larger models to handle multiple types of input data, it combines smaller, single-modality models that work collaboratively in a single application.

Multimodal AI and agentic AI will be crucial developments if artificial intelligence is to become seamlessly integrated into our lives.

This block diagram depicts a proof of concept developed by Avnet that uses a multimodal AI model, retrieval-augmented generation (RAG) and text/speech conversion. (Source: Avnet Silica)
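
As an illustration of how such a pipeline fits together, here is a minimal sketch that wires speech-to-text, a RAG lookup and a multimodal model into a single request/response loop. Every function is a stand-in stub invented for this sketch so it runs end to end; it does not describe the actual components of the Avnet Silica proof of concept.

```python
# Hypothetical sketch of a voice-driven, RAG-assisted multimodal pipeline.
# All components are stubs; a real system would plug in actual speech-to-text,
# retrieval, model inference and text-to-speech engines at the marked points.

def speech_to_text(audio: bytes) -> str:
    """Stand-in for an on-device or cloud speech-to-text engine."""
    return "what is the part number of the highlighted connector"

def retrieve_context(query: str, k: int = 3) -> list[str]:
    """Stand-in for RAG retrieval: return documents relevant to the query."""
    knowledge_base = {
        "connector": "Datasheet excerpt: 12-pin connector, part number ABC-123.",
        "sensor": "Datasheet excerpt: MEMS accelerometer, part number XYZ-456.",
    }
    return [text for key, text in knowledge_base.items() if key in query][:k]

def multimodal_generate(prompt: str, image: bytes) -> str:
    """Stand-in for a model that accepts text plus an image frame."""
    return "The highlighted connector is part number ABC-123."

def text_to_speech(reply: str) -> bytes:
    """Stand-in for a text-to-speech engine."""
    return reply.encode("utf-8")

def answer(audio: bytes, frame: bytes) -> bytes:
    query = speech_to_text(audio)                       # 1. voice -> text
    context = "\n".join(retrieve_context(query))        # 2. RAG lookup
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    reply = multimodal_generate(prompt, image=frame)    # 3. multimodal inference
    return text_to_speech(reply)                        # 4. text -> voice

if __name__ == "__main__":
    print(answer(b"<audio clip>", b"<camera frame>"))
```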

THE ROLE OF MACHINE LEARNING

Machine learning (ML) does not feature strongly in the main AI narrative, where the emphasis is on generative AI running on powerful processors in data centers. For embedded systems, particularly control-oriented systems that include some electromechanical elements, ML is more relevant.

The latest trend we’re seeing, and supporting with our supplier partners, is ML using time-series data. This uses the output of a sensor over time to infer the action needed to achieve a specific task.

This kind of closed-loop control is typical in embedded systems, but it has historically acted on the immediate present, hence the importance of real-time control. With ML, the time horizon extends into the future: the system can infer what is about to happen, not only what is happening now. This approach provides the basis for predictive maintenance and will be adapted to many other applications.
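
As a rough illustration of the idea, the sketch below buffers readings from a hypothetical vibration sensor into fixed windows and scores each window with a toy stand-in for a trained time-series model; a high score triggers a maintenance action. The window size, threshold and scoring rule are illustrative assumptions, not values from a real deployment.

```python
# Minimal sketch of time-series ML for predictive maintenance.
# The "model" is a toy threshold rule standing in for a trained network that
# might run on an ML-enabled sensor or microcontroller.

from collections import deque
from statistics import mean, pstdev

WINDOW_SIZE = 64        # sensor samples per inference window (illustrative)
ALERT_THRESHOLD = 0.8   # score above which maintenance is scheduled (illustrative)

window = deque(maxlen=WINDOW_SIZE)

def extract_features(samples):
    """Summarize a window of readings: average level and variability."""
    return mean(samples), pstdev(samples)

def wear_score(features):
    """Toy stand-in for a trained time-series model: more variability, higher score."""
    _, spread = features
    return min(spread / 5.0, 1.0)

def on_sample(value):
    """Feed one new sensor reading; infer an action once a full window exists."""
    window.append(value)
    if len(window) == WINDOW_SIZE:
        score = wear_score(extract_features(window))
        window.clear()                              # start the next window
        if score > ALERT_THRESHOLD:
            print(f"wear score {score:.2f}: schedule maintenance")

if __name__ == "__main__":
    import random
    for _ in range(200):
        on_sample(random.gauss(0.0, 6.0))           # simulated noisy vibration data
```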

Combining time-series ML with other forms of AI using different modalities will enable complex systems to be realized more practically. The underlying hardware and software solutions are available now and supported by Avnet’s /IOTCONNECT™ platform.

REALIZING MULTIMODAL AI

Compared with enterprise-level systems, embedded systems are regarded as constrained by size, power and cost. We recognize the limitations this puts on software running on embedded hardware platforms. Semiconductor manufacturers are developing more capable embedded processors that deliver the performance needed to run AI and ML, but with close consideration for the power limitations of embedded systems.

The integration of multimodal AI may take several forms:

  1. Multiple models running on a single processor (see the sketch after this list). The processor may feature a single core or multiple cores, one of which may be dedicated to vector execution. Alternatively, the core(s) may have hardware extensions for AI acceleration.
  2. A single model, trained on multiple data types. This is the direction enterprise AI is moving, but probably less applicable to embedded platforms due to the resources needed.
  3. Simpler, pruned models designed to use fewer embedded resources. This approach would support multiple models running concurrently on the same processor.
  4. Dedicated hardware for each model. This approach works with time-series ML, for example, where the sensor integrates an ML core for processing data on the device.
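
To make the first and third options more concrete, here is a minimal sketch of two stubbed single-modality models sharing one processor. The model functions, frame sources and use of Python threads are illustrative assumptions; on a real embedded target the models would typically be quantized networks dispatched to an NPU, DSP or vector unit.

```python
# Sketch of two lightweight models running concurrently on one processor.
# Both model functions are placeholders; a shared queue fuses their outputs.

import queue
import threading

events = queue.Queue()   # fused events from both models drive the application

def keyword_model(audio_frame: bytes):
    """Stub for a small keyword-spotting model."""
    return "wake-word" if audio_frame else None

def vision_model(image_frame: bytes):
    """Stub for a pruned object-detection model."""
    return "person detected" if image_frame else None

def audio_task():
    for _ in range(3):                      # stand-in for a stream of audio frames
        result = keyword_model(b"<pcm frame>")
        if result:
            events.put(f"audio: {result}")

def vision_task():
    for _ in range(3):                      # stand-in for a stream of camera frames
        result = vision_model(b"<image frame>")
        if result:
            events.put(f"vision: {result}")

if __name__ == "__main__":
    tasks = [threading.Thread(target=audio_task), threading.Thread(target=vision_task)]
    for t in tasks:
        t.start()
    for t in tasks:
        t.join()
    while not events.empty():
        print(events.get())
```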

Another approach, referred to as model cascading, uses simpler AI models to break a task down. Preprocessing the data can trigger a subsequent step, which may involve another model working on the same data. This hierarchical approach can use successively more complex models to make the best use of the available resources.
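
A minimal sketch of such a cascade, assuming a cheap motion filter that gates a heavier classifier, might look like the following; both model functions are placeholders.

```python
# Sketch of model cascading: a tiny first-stage model screens every frame and
# only promising frames reach the larger, more expensive second-stage model.

def tiny_motion_filter(frame: bytes) -> bool:
    """Stage 1: very small model (or heuristic) answering 'is anything happening?'."""
    return frame != b"static"

def large_classifier(frame: bytes) -> str:
    """Stage 2: bigger model, run only when stage 1 fires."""
    return "forklift detected"

def process(frame: bytes):
    if not tiny_motion_filter(frame):   # cheap filter runs on every frame
        return None                     # most frames stop here, saving compute
    return large_classifier(frame)      # expensive model runs only occasionally

if __name__ == "__main__":
    for frame in (b"static", b"static", b"something moved"):
        print(process(frame))
```

Because most data never reaches the later stages, the average compute cost stays close to that of the smallest model, which is what makes the hierarchy attractive on constrained hardware.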

Multimodal AI can be seen as both inevitable and essential to moving IoT into its next phase of market adoption. Avnet is working with its supplier partners to develop pretrained models that can be combined on a single platform to deliver multimodal AI. We are adding new AI-enabled integrated solutions to our linecards that will take IoT into a new paradigm.

ABOUT THE AUTHOR

Jennifer Skinner-Gray

Senior Director, Supplier Technology Enablement, Avnet


Jennifer Skinner-Gray is senior director of supplier technology enablement for Avnet, leading our IoT strategy. Skinner-Gray drives cloud enablement and secure connectivity strategy to solidify our software and services market approach.