Companies around the world are adopting AI-powered automation to operate more efficiently, adapt faster, and innovate. Yet many are struggling. McKinsey’s 2024 Global Survey on the State of AI reports that 88% of businesses are experimenting with AI, but only 30% say they have seen meaningful cost or revenue improvements so far. Boston Consulting Group (BCG) finds that 74% of companies have trouble getting AI to work at a scale beyond pilot programs.
This article examines the seven biggest challenges businesses face when deploying AI-driven automation, and explains how to address each one.
1. Data Quality and Availability
Problem summary: AI models need large volumes of well-organized, high-quality data. Yet 45% of organizations report concerns about bias and data veracity, and 42% say they lack enough proprietary data to make models perform well.
- Fragmented data silos: Many businesses keep data spread across multiple aging systems, which makes it hard to clean and merge.
- Inconsistent formats: Without common data schemas, combining data from functions such as sales, HR, and operations leads to mismatched fields and lost insights.
- Bias and compliance risks: Poor data governance can produce unfair outcomes and violate privacy regulations such as GDPR or CCPA.
How to mitigate:
- Data auditing and governance frameworks: Use tools such as Collibra or Informatica to monitor, validate, and trace data pipelines, and appoint data stewards across departments.
- Synthetic and augmented data: Use GANs and related techniques to expand small datasets so models can learn from data that resembles what they will see in production.
- Continuous data quality monitoring: Use automated data quality dashboards to surface issues (such as missing values and outliers) as they occur so they can be fixed right away.
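Under the hood, a data quality dashboard boils down to automated checks like the following sketch. It flags missing values and z-score outliers in a single numeric field; the field name, threshold, and sample records are invented for illustration.

```python
from statistics import mean, stdev

def data_quality_report(records, field, z_threshold=3.0):
    """Flag missing values and z-score outliers in one numeric field.

    `records` is a list of dicts; the field name and threshold here
    are illustrative, not a standard.
    """
    values = [r.get(field) for r in records]
    missing = sum(1 for v in values if v is None)
    present = [v for v in values if v is not None]
    outliers = []
    if len(present) > 1:
        mu, sigma = mean(present), stdev(present)
        if sigma > 0:
            outliers = [v for v in present if abs(v - mu) / sigma > z_threshold]
    return {
        "rows": len(records),
        "missing": missing,
        "missing_rate": missing / len(records) if records else 0.0,
        "outliers": outliers,
    }

# Invented invoice feed: 20 ordinary amounts, one extreme value, one missing.
rows = [{"amount": 100 + i % 5} for i in range(20)]
rows += [{"amount": 5000}, {"amount": None}]
report = data_quality_report(rows, "amount")
```

In practice a check like this would run on every pipeline batch and feed the dashboard, so a spike in the missing rate or outlier count is visible the day it happens rather than at model retraining time.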
2. Shortage of Skilled Professionals
Problem summary: Qualified AI talent remains hard to find. Gartner predicts that 60% of AI initiatives fail for lack of expertise in data science, DevOps, and AI ethics.
- Talent competition: Many organizations are understaffed because large tech companies and startups compete fiercely for AI engineers.
- Cross-disciplinary needs: AI takes more than data scientists; it also needs UX designers, change managers, and subject-matter experts.
- Evolving skill sets: Because AI advances so quickly, teams must keep learning new skills in areas such as MLOps, prompt engineering, and generative AI.
How to mitigate:
- Internal upskilling programs: Use platforms such as Coursera or Udacity to build learning paths that combine online lessons with short real-world projects.
- AI Centers of Excellence (CoE): Set up a CoE that pools AI professionals and gives business units advice, best practices, and mentoring.
- Academic partnerships: Work with universities to help graduates find roles and offer internships focused on AI ethics and automation.
3. Scaling and Integration Problems
Problem summary: BCG reports that 75% of executives call AI a top priority, yet only 5% act on it at scale. This highlights how hard it is to move from “bolted-on” pilot solutions to solutions that work for the whole firm.
- Legacy systems: Older ERP or CRM systems may not work well with AI because they lack the APIs or microservices architectures AI needs.
- Technology sprawl: Too many point solutions, such as chatbots, RPA bots, and vision systems, complicate management and make customer experiences inconsistent.
- Scalability bottlenecks: Models that work well in a controlled setting often cannot handle the data volumes or user counts seen in production.
How to mitigate:
- Containerization and MLOps pipelines: Use tools such as Kubernetes and Kubeflow to handle scaling, version control, and model deployment.
- API-first architecture: Expose AI components as services with clear REST or gRPC interfaces so other applications can consume them.
- Phased rollouts and pilots: Start with low-risk, high-value use cases such as customer service triage to validate integration patterns before rolling out more widely.
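One way to read the API-first advice: keep the model behind a narrow, typed contract so callers never depend on its internals. The sketch below uses the customer-service triage example; the request/response types and the keyword heuristic standing in for a real model are invented, and in production the handler would sit behind a REST or gRPC endpoint.

```python
from dataclasses import dataclass

@dataclass
class TriageRequest:
    """Illustrative contract for a customer-service triage service."""
    ticket_id: str
    text: str

@dataclass
class TriageResponse:
    ticket_id: str
    priority: str    # "high" or "normal"
    confidence: float

# Stand-in for a real classifier; any model could replace it later
# without changing the interface callers depend on.
URGENT_WORDS = {"outage", "refund", "urgent", "broken"}

def triage(req: TriageRequest) -> TriageResponse:
    """Service handler: callers see only the typed contract, not the model."""
    hits = sum(w in req.text.lower() for w in URGENT_WORDS)
    priority = "high" if hits else "normal"
    confidence = min(0.5 + 0.2 * hits, 0.95)
    return TriageResponse(req.ticket_id, priority, confidence)

resp = triage(TriageRequest("T-42", "Payment page is broken, need urgent refund"))
```

The point of the design is that swapping the keyword heuristic for an actual ML model is invisible to every consumer of the service, which is what makes phased rollouts safe.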
4. Change Management and Organizational Resistance
Problem summary: Some employees resist AI-driven ways of working. According to Gartner, 20% of workers believe AI has made their jobs harder, and many still fear losing their jobs to it.
- Cultural barriers: People used to doing things by hand may not trust “black-box” AI decisions.
- Unrealistic executive expectations: Leaders often want large, fast gains without investing in training or system changes.
- Low AI literacy: Employees who do not understand what AI is are less likely to use it.
How to mitigate:
- Open stakeholder engagement: Hold town halls to demonstrate what AI can do, explain how it works, and address employees’ concerns.
- Co-designed workflows: When building AI-enabled processes, gather feedback from the people who will use them to make sure they are useful and relevant.
- Change champions and incentives: Reward teams that succeed with AI solutions to keep momentum going.
5. Legal, Ethical, and Societal Considerations
Problem summary: As AI becomes more central to business, firms face growing pressure to ensure it is fair, transparent, and accountable. Sonatype argues that trust can only be earned by using AI responsibly and following the rules.
- Unaudited algorithms: Failing to review algorithms can break the law and damage your reputation.
- Regulatory uncertainty: It is hard to know what to do as new legislation emerges, such as the U.S. Executive Orders and the EU AI Act.
- Explainability demands: Users and regulators increasingly expect AI systems to explain how they reached their answers.
How to mitigate:
- AI ethics committees: Bring together professionals from different departments to review use cases, set fairness norms, and approve deployments.
- Third-party auditing: Check models for bias, explainability, and robustness with tools such as IBM AI Fairness 360.
- Regulatory monitoring: Assign compliance owners to track legislative changes and keep policies up to date.
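Toolkits such as AI Fairness 360 automate audits of this kind, but the simplest check they perform, the disparate-impact ratio, fits in a few lines. The sketch below is illustrative only: the 0.8 threshold echoes the common “four-fifths” rule of thumb, and the loan-approval data is invented.

```python
def disparate_impact(outcomes, groups, favorable=1, threshold=0.8):
    """Ratio of favorable-outcome rates between two groups.

    `outcomes` and `groups` are parallel lists; a ratio below `threshold`
    (the common four-fifths rule of thumb) flags potential bias.
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "this sketch handles exactly two groups"
    rates = []
    for g in labels:
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates.append(sum(o == favorable for o in selected) / len(selected))
    ratio = min(rates) / max(rates) if max(rates) else 1.0
    return ratio, ratio >= threshold

# Invented data: group "a" approved 3 of 4 times, group "b" only 1 of 4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a"] * 4 + ["b"] * 4
ratio, passes = disparate_impact(outcomes, groups)
```

An ethics committee would not stop at one number, but even this single ratio, computed on every model release, turns “audit for bias” from a policy statement into a gate a deployment can fail.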
6. Security, Privacy, and Intellectual Property Risks
Problem summary: McKinsey finds that concerns about privacy, IP theft, and errors are growing, even as teams get better at reducing risk.
- Adversarial attacks: Without strong defenses, attackers can manipulate inputs to make models misbehave or leak sensitive training data.
- Data privacy breaches: AI systems handling private or personal information can leak PII if data is unencrypted or access controls are weak.
- IP uncertainty: As AI generates more content, questions about copyright, licensing, and derivative works keep multiplying.
How to mitigate:
- Secure MLOps: Apply threat modeling, code signing, and vulnerability scanning across the ML pipeline.
- Federated learning and encryption: Use federated learning or homomorphic encryption to train models on decentralized data without exposing raw inputs.
- Clear IP policies: Contracts should spell out who owns AI outputs and who may use them, which reduces legal exposure.
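The federated-learning idea in miniature: each site computes a model update on its own data and shares only the update, never the raw records. The toy round below fits one slope parameter of a 1-D linear model by federated averaging; the two client datasets are invented for the demo.

```python
def local_update(w, data, lr=0.1):
    """One gradient step of fitting y ≈ w*x, computed on local data only."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w, client_datasets, lr=0.1):
    """Server averages the clients' updated weights; raw rows never move."""
    updates = [local_update(w, d, lr) for d in client_datasets]
    return sum(updates) / len(updates)

# Two clients hold disjoint samples drawn from y = 2x (invented data).
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
# After enough rounds, w approaches the true slope 2.0.
```

Real deployments add secure aggregation and differential-privacy noise on top of this loop, but the privacy property is already visible here: the server sees only weight values, never the (x, y) records held by each client.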
7. Difficulty Measuring ROI and Cost
Problem summary: Only 26% of businesses that invest heavily in AI report real benefits from automating their processes.
- Unclear metrics: Without well-defined key performance indicators (KPIs), such as cycle-time reduction, revenue generation, or error-rate reduction, projects lack clear goals.
- Short-term focus: Worrying too much about quick wins can crowd out AI’s longer-term strategic value.
- Funding hurdles: It is hard to secure budget for benefits that are difficult to quantify, such as customer satisfaction.
How to mitigate:
- Value-driven roadmaps: Define success targets at the start of a project, such as cutting invoice processing time by 20%, then track them on business dashboards.
- Balanced scorecards: Combine quantitative and qualitative KPIs, such as an innovation index and employee satisfaction, to get a complete view of value.
- Stage-gate funding: Use an iterative funding approach in which each tranche is tied to validated ROI from the previous stages.
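A target like the 20% invoice-time cut above can be checked mechanically at each funding gate. The sketch below shows the arithmetic; the before/after timing samples are invented.

```python
def cycle_time_reduction(baseline_minutes, current_minutes):
    """Fractional reduction in average processing time versus baseline."""
    base = sum(baseline_minutes) / len(baseline_minutes)
    curr = sum(current_minutes) / len(current_minutes)
    return (base - curr) / base

def gate_decision(reduction, target=0.20):
    """Stage-gate rule: release the next funding tranche only when the
    validated reduction meets the target set at project start."""
    return "fund next stage" if reduction >= target else "hold and review"

# Invented samples: invoices took ~40 minutes before automation, ~30 after.
before = [42, 39, 41, 38, 40]
after = [31, 29, 30, 32, 28]
r = cycle_time_reduction(before, after)
decision = gate_decision(r)
```

Tying the tranche to a number computed from production data, rather than to a status report, is what makes stage-gate funding hard to game.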
Conclusion
Leveraging AI-powered automation takes an effective data strategy, skilled people, up-to-date technology, and strong governance. By proactively addressing these seven issues (data preparation, talent shortages, integration hurdles, change management, ethical compliance, security threats, and ROI measurement), organizations can progress from small-scale trials to full-scale transformation.
By being transparent about how they use AI, continually building skills, and focusing on value, companies can both reduce risk and capture AI’s full potential: greater long-term efficiency, an edge over competitors, and the trust of their stakeholders.
Frequently Asked Questions (FAQs)
1. What is AI-powered automation? It is the use of AI and machine learning techniques, such as natural language processing or computer vision, to perform tasks once done by people, making them faster and more accurate.
2. How long does AI automation take to implement? It depends on complexity. Simple RPA bots can be set up in a few weeks, while enterprise-level AI systems that need data readiness, integration, and governance can take 6 to 18 months.
3. How can small and medium-sized enterprises (SMEs) adopt AI automation on a limited budget? Cloud-based AI services such as Azure Cognitive Services offer pay-as-you-go pricing that keeps initial spending low while you focus on the use cases with the biggest impact.
4. How do you ensure AI solutions treat all groups fairly? Apply fairness criteria such as equalized odds and demographic parity, run regular bias audits, and make sure your datasets are diverse and cover all user groups.
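Equalized odds, mentioned in the answer above, asks whether a model's true-positive and false-positive rates match across groups. A minimal check might look like this (the predictions, labels, and group assignments are invented):

```python
def rates(y_true, y_pred):
    """True-positive and false-positive rates for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), fp / (fp + tn)

def equalized_odds_gap(y_true, y_pred, groups, a, b):
    """Largest TPR/FPR gap between groups `a` and `b`; 0 means parity."""
    def subset(g):
        yt = [t for t, grp in zip(y_true, groups) if grp == g]
        yp = [p for p, grp in zip(y_pred, groups) if grp == g]
        return rates(yt, yp)
    (tpr_a, fpr_a), (tpr_b, fpr_b) = subset(a), subset(b)
    return max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))

# Invented data: the model catches positives in group "a" more often.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
groups = ["a"] * 4 + ["b"] * 4
gap = equalized_odds_gap(y_true, y_pred, groups, "a", "b")
```

A gap near zero suggests the criterion is satisfied on this data; a large gap, as here, is the signal a bias audit would escalate.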
5. What should I measure to assess AI’s value? Track task completion time, errors eliminated, cost savings, revenue growth, and user satisfaction, and combine the numbers with feedback from stakeholders to get the whole story.
References
- Boston Consulting Group. “AI Adoption in 2024: 74% of Companies Struggle to Achieve and Scale Value.” BCG Press, October 24, 2024. https://www.bcg.com/press/24october2024-ai-adoption-in-2024-74-of-companies-struggle-to-achieve-and-scale-value
- Gartner. “AI Automation Poses Risks for Businesses, Gartner Finds.” B2B News, July 2024. https://b2bnews.co.nz/news/ai-automation-poses-risks-for-businesses-gartner-finds/
- IBM. “AI Adoption Challenges.” IBM THINK, February 2025. https://www.ibm.com/think/insights/ai-adoption-challenges
- Sonatype. “AI Impact on the Future of Automation and Ethics: Insights from Gartner Report.” Sonatype Blog, February 2025. https://www.sonatype.com/blog/ai-impact-on-the-future-of-automation-and-ethics-insights-from-gartner-report
- The Wall Street Journal. “Why Companies Are Already All-In on AI After Arriving Late to Everything Else.” WSJ, June 2025. https://www.wsj.com/articles/why-companies-are-already-all-in-on-ai-after-arriving-late-to-everything-else-357ad090
- Duranton, Sylvain (Boston Consulting Group). “The AI Mistake Companies Are Making — and How They Can Fix It.” Business Insider, May 2025. https://www.businessinsider.com/ai-mistake-companies-make-bcg-tech-executive-2025-5
- McKinsey & Company. “The State of AI: Global Survey.” McKinsey Insights, March 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai