Demystification of Artificial Intelligence


By Marina Pashkevich, VP Advanced Analytics & Artificial Intelligence, Oshkosh Corporation

In recent years, there has been a surge of interest in artificial intelligence (AI) in the news. Articles report significant investment in the technology, and the press discusses computers that can outsmart professionals, eliminate many jobs, and even threaten the survival of humankind.

Is this reality or myth? What is AI, and where can we utilize this technology?

The Wikipedia definition of AI: “Artificial intelligence is the intelligence of machines or software, as opposed to the intelligence of human beings or animals.” 

In other words, AI can perform tasks that would normally require human intelligence, such as visual perception, speech recognition, decision-making under uncertainty, learning, and translation between languages.

The History of AI

The concept of AI is not new. In the 1950s, researchers started working on simulating human intelligence and showed that computers could accomplish tasks such as solving calculus problems, responding to commands by planning and performing physical actions, and even impersonating a psychotherapist and composing music. But the limitations of computing power blocked attempts to tackle a broader range of problems.

Technical work on AI continued at a lower profile in the 1990s. Techniques such as neural networks and genetic algorithms received fresh attention, partly because they avoided some of the limitations of expert systems and partly because new algorithms made them more effective. The design of neural networks takes its inspiration from the structure of the brain; however, their outputs were difficult to explain.
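
To make that design concrete, here is a minimal sketch, in Python, of how a small neural network computes a prediction: inputs flow through layers of weighted connections, loosely analogous to neurons and synapses. The layer sizes, weights, and input below are invented for illustration, not taken from any real system.

```python
# Minimal two-layer neural network (illustrative only: the layer
# sizes, weights, and input are invented, not a trained model).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Nonlinear "activation", loosely analogous to a neuron firing.
    return 1.0 / (1.0 + np.exp(-x))

W1 = rng.normal(size=(3, 4))  # weights: 3 inputs -> 4 hidden units
W2 = rng.normal(size=(4, 1))  # weights: 4 hidden units -> 1 output

def forward(x):
    hidden = sigmoid(x @ W1)     # hidden layer of "neurons"
    return sigmoid(hidden @ W2)  # output layer: the prediction

print(forward(np.array([0.5, -0.2, 0.1])))
```

Even in this toy version, the prediction emerges from many interacting weights, which is exactly why the outputs of trained networks can be hard to explain.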

In the late 2000s, AI progress was renewed by several factors: the big data phenomenon, mobile devices, low-cost sensors, the internet, cloud computing, and increased computing power. These components fueled work on AI approaches that demand large datasets.

The availability of new open-source algorithms also enhanced the work of data scientists and the surrounding technologies. Improved neural network algorithms considerably boosted machine learning performance and became catalysts for other technologies, which can now perform tasks that were once possible only for humans.

We refer to these capabilities as cognitive technologies.

Examples of cognitive technologies include:

• Natural language processing – the ability of machines to work with text the way humans do (see the sketch below).
• Robotics – the ability to replicate or substitute human actions by combining cognitive technology and sensors.
• Speech recognition – the ability to interact with a computer using spoken natural language.

Beyond these individual capabilities, AI has several characteristics that make it much smarter than your average coffee machine, such as its ability to learn, reason, and solve problems.
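
As a small, hedged illustration of the first of these technologies, natural language processing, the sketch below shows one classic building block: representing text as word counts (a "bag of words") and comparing documents by their overlap. The sentences are invented for the example; real systems use far richer representations.

```python
# Minimal bag-of-words sketch, one classic NLP building block.
# The example sentences are invented for illustration.
from collections import Counter

def bag_of_words(text):
    # Represent text simply as lowercase word counts.
    return Counter(text.lower().split())

def overlap(a, b):
    # Shared-word count: a crude stand-in for semantic similarity.
    return sum((bag_of_words(a) & bag_of_words(b)).values())

print(overlap("the truck needs service", "schedule truck service today"))  # 2
```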

How AI is utilized today

• The consumer industry learns about customers’ preferences in real time to create customized offers.
• Instagram uses AI to target advertising, remove spam, and improve the customer experience.
• Chatbots answer questions and provide relevant content for common queries.
• Autonomous vehicles and drones use AI to provide delivery and surveillance services.
• Music and media streaming services use AI to suggest content based on your preferences and past usage (see the sketch after this list).
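
As a hedged sketch of that last example, the toy code below recommends songs by finding the listener whose ratings look most like yours and borrowing their picks. The ratings table is hypothetical, and real streaming services use far more sophisticated models.

```python
# Toy preference-based recommender. The ratings table is hypothetical:
# rows are listeners, columns are songs, and 0 means "unrated".
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],   # you
    [5, 5, 4, 0],   # listener A
    [0, 1, 5, 4],   # listener B
], dtype=float)

def cosine(u, v):
    # Cosine similarity: closer to 1.0 means more similar taste.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

you = ratings[0]
others = ratings[1:]
most_similar = others[int(np.argmax([cosine(you, o) for o in others]))]

# Suggest songs the similar listener liked that you have not rated yet.
suggestions = np.where((you == 0) & (most_similar > 0))[0]
print("suggested song indices:", suggestions)  # -> [2]
```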

AI algorithms play a significant role in our everyday lives by enabling us to be more productive and focus on real-world problems.

Limitations and use of AI within the organization

AI is not a panacea; adopting it is a journey that requires relevant experience to make it work.

To capture the complexity of the real-world use case, AI models must be trained on real data.

Identifying the areas where AI can provide the most value is crucial for organizations. Addressing pain points with AI requires an understanding of the use case’s scope, the relevant business processes, and the data flows involved.

After identifying potential use cases, investing in the technology and skills necessary to leverage AI’s potential is crucial.

To operate efficiently, AI requires vast amounts of data and computational power, so investment in these areas is essential. This includes building the necessary infrastructure, data storage and processing systems, and high-performance computing clusters.

Data quality is crucial for building trust in the data being used.

AI investment is not a one-time event but a continual process of improvement and refinement. As new information becomes available and business processes change, AI models must be updated and optimized to maintain accuracy and relevance. To do so, we must continually invest in data management, data quality, and model training.

Takeaway

It is essential to realize that artificial intelligence is a technology, not a magic pill that solves all problems. Sometimes artificial intelligence may not be the answer to a particular problem.

Before beginning work on a new use case, it is important to consult the list of requirements and understand the scope and expected outcome.

Most AI algorithms are based on the neural network, a technique that was once nearly forgotten and has since been modernized.

Artificial intelligence is shaping the future of humanity in almost every industry. It is already the main driver of emerging technologies like big data, robotics and IoT — not to mention generative AI, with tools like ChatGPT and AI art generators garnering mainstream attention — and it will continue to act as a technology innovator for the foreseeable future. Nevertheless, humans will continue to manage AI activities.