5 MYTHS ABOUT ARTIFICIAL INTELLIGENCE (AI)
(FROM A SOFTWARE PERSPECTIVE)
By Phil Robinson, General Manager, Witekio
Production-ready demo code and quick implementation. That’s what they tell you, but is it too good to be true? We break down some of the myths surrounding AI software implementation.
MYTH #1
Demo code is production-ready
AI demos always look impressive, but getting that demo into production is an entirely different challenge. Productionizing AI requires effort to ensure it's secure, optimized for your hardware and tailored to meet your specific customer needs.
The gap between a working demonstration and real-world deployment often includes considerations like performance, scalability and maintainability. One of the biggest hurdles is maintaining AI models over time, particularly if you need to retrain the application and update the inference engine across thousands of deployed devices.
Ensuring long-term support, handling versioning and managing updates without disrupting service add layers of complexity that go far beyond an initial demo.
Additionally, the real-world environment for AI applications is dynamic. Data shifts, changing user behavior and evolving business needs all require frequent updates and fine-tuning.
Organizations must implement robust pipelines for monitoring model drift, collecting new data and retraining models in a controlled and scalable way. Without these mechanisms in place, AI performance can degrade over time, leading to inaccurate or unreliable outputs.
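A drift monitor can start very simply: compare the distribution of features arriving at the model in the field against the distribution seen at training time. Here is a minimal sketch in plain Python; the two-sigma threshold, the single feature and the sample values are illustrative assumptions, not a production-grade drift metric.

```python
import statistics

def detect_drift(baseline, recent, threshold=2.0):
    """Flag drift when the recent feature mean shifts more than
    `threshold` baseline standard deviations from the training mean."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    shift = abs(statistics.mean(recent) - base_mean) / base_std
    return shift > threshold, shift

# Feature values seen at training time vs. two field samples
baseline = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
stable = [1.02, 0.98, 1.0, 1.01]    # similar distribution
shifted = [2.1, 2.0, 2.2, 1.9]      # clearly drifted

print(detect_drift(baseline, stable)[0])   # → False
print(detect_drift(baseline, shifted)[0])  # → True
```

In practice this check would run per feature on sliding windows of live traffic, and a sustained alert would trigger data collection and retraining rather than an immediate model swap.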
Emerging techniques like federated learning allow decentralized model updates without sending raw data back to a central server, helping improve model robustness while maintaining data privacy.
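The core of federated learning, federated averaging, can be sketched in a few lines of plain Python: each client updates the model on its own data and only the resulting weights travel to the server, which averages them. The two-client setup, learning rate and hand-written gradients below are illustrative assumptions; real deployments run many rounds across many devices and add secure aggregation.

```python
def local_update(weights, gradient, lr=0.1):
    """One gradient step on a client's private data.
    Only the updated weights leave the device, never the raw data."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(client_weights):
    """Server-side step: element-wise average of the clients' weights."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

global_model = [0.5, -0.2]

# Each client trains locally; gradients here stand in for real training
client_a = local_update(global_model, gradient=[0.1, -0.3])
client_b = local_update(global_model, gradient=[0.3, 0.1])

global_model = federated_average([client_a, client_b])
print(global_model)  # averaged weights pushed back to all devices
```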


MYTH #2
All you need is Python
Python is an excellent tool for rapid prototyping, but its limitations in embedded systems become apparent when scaling to production.
In resource-constrained environments, languages like C++ or C often take the lead for their speed, memory efficiency and hardware-level control. While Python has its place in training and experimentation, it rarely powers production systems in embedded AI applications.
In addition, deploying AI software requires more than just writing Python scripts. Developers must navigate dependencies, version mismatches and performance optimizations tailored to the target hardware.
While Python libraries make development easier, achieving real-time inference or low-latency performance often necessitates re-implementing critical components in optimized languages like C++ or even assembly for certain accelerators. ONNX Runtime and TensorRT provide performance improvements for Python-based AI models, bridging some of the efficiency gaps without requiring full rewrites.
MYTH #3
Any hardware can run AI
The myth that "any hardware can run AI" is far from reality. The choice of hardware is deeply intertwined with the software requirements of AI.
High-performance AI algorithms demand specific hardware accelerators, compatibility with toolchains and memory capacity. Choosing mismatched hardware can result in performance bottlenecks or even an inability to deploy your AI model.
For example, deploying deep learning models on edge devices requires selecting chipsets with AI accelerators like GPUs, TPUs or NPUs. Even with the right hardware, software compatibility issues can arise, requiring specialized drivers and optimization techniques.
Understanding the balance between processing power, energy consumption, and cost is crucial to building a sustainable AI-powered solution. While AI is now being optimized for TinyML applications that run on microcontrollers, these models are significantly scaled down, requiring frameworks like TensorFlow Lite for Microcontrollers for deployment.
MYTH #4
AI is quick to implement
AI frameworks like TensorFlow or PyTorch are powerful, but they don’t eliminate the steep learning curve or the complexity of real-world applications. If it’s your first AI project, expect delays.
Beyond the framework itself, one of the biggest challenges is creating a toolchain that integrates one of these frameworks with the IDE for your chosen hardware platform. Ensuring compatibility, optimizing models for edge devices, integrating with legacy systems and meeting market-specific requirements all add to the complexity.
For applications outside the smartphone or consumer tech domain, the lack of pre-existing solutions further increases development effort.

MYTH #5
Any OS can run AI
Operating system choice matters more than you think. Certain AI platforms work best with specific distributions and can face compatibility issues with others.
The myth that "any OS will do" ignores the complexity of kernel configurations, driver support and runtime environments. To avoid costly rework or hardware underutilization, ensure your OS aligns with both your hardware and AI software stack.
Additionally, real-time AI applications, such as those in automotive or industrial automation, often require an OS with real-time capabilities. This means selecting an OS that supports deterministic execution, low-latency processing, and security hardening.
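One way to see why a general-purpose OS falls short here is to measure scheduling jitter: ask for a fixed sleep period and record how far each wake-up overshoots the request. This is a rough illustrative sketch (the period and iteration count are arbitrary), but on a desktop OS the worst case is unbounded, which is exactly what a real-time OS is designed to prevent.

```python
import time

def measure_jitter(period_s=0.005, iterations=50):
    """Sleep for a fixed period repeatedly and record how far each
    wake-up overshoots the request (scheduling jitter)."""
    overshoots = []
    for _ in range(iterations):
        start = time.perf_counter()
        time.sleep(period_s)
        elapsed = time.perf_counter() - start
        overshoots.append(elapsed - period_s)
    return overshoots

jitter = measure_jitter()
print(f"worst-case overshoot: {max(jitter) * 1000:.3f} ms")
```

A deterministic OS bounds that worst case by design; on a non-real-time kernel an AI inference deadline can be missed whenever the scheduler is busy elsewhere.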
Developers must carefully evaluate the trade-offs between flexibility, support and performance when choosing an OS for AI deployment, not least because some AI accelerators are only supported on specific operating systems.
WHAT'S NEXT FOR AI AT THE EDGE?
We’re already seeing large language models (LLMs) give way to small language models (SLMs) in constrained devices, putting the power of generative AI into smaller products. If this is the direction you’re going, talk to the experts at Witekio.
We can help you get there.
ABOUT THE AUTHOR

Phil Robinson
General Manager, Witekio
Phil Robinson is Witekio's general manager, having served the company in positions of increasing responsibility since 2018. Witekio, an Avnet company, is a leading provider of chip-to-cloud embedded software services for device makers, covering embedded systems consulting, embedded development services (BSPs, Android, Linux, RTOS, firmware, OTA and more), C++ app development services (Qt, Flutter), connectivity, security and beyond.