Artificial Intelligence: Revolution of Science


What is AI?

Artificial intelligence (AI) refers to any human-like behaviour displayed by a machine or software system. In its most basic form, AI means computers programmed to mimic human behaviour, using extensive data from past examples of similar behaviour. This can range from distinguishing a cat from a dog to performing complex activities in a manufacturing facility.

A brief history of artificial intelligence

Before 1949, computers could execute commands or programs, but they could not remember what they had done because they were unable to store those commands and programs. In 1950, Alan Turing discussed how to build intelligent machines and test their intelligence in his paper “Computing Machinery and Intelligence.” Five years later, the first AI program was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). This event catalysed AI research for the next few decades.

Computers became faster, cheaper, and more accessible between 1957 and 1974. Machine learning algorithms improved and, in the 1970s, one of the hosts of DSRPAI told Life Magazine that there would be a machine with the general intelligence of an average human being in three to eight years. Despite these successes, computers’ inability to efficiently store or quickly process information created obstacles in the pursuit of artificial intelligence for the next ten years.

AI was revived in the 1980s by an expanded algorithmic toolkit and more dedicated funding. John Hopfield and David Rumelhart introduced “deep learning” techniques that allowed computers to learn through experience, and Edward Feigenbaum introduced “expert systems” that mimic human decision-making. Even in the absence of government funding and public hype, AI thrived, and many landmark goals were achieved over the next two decades. In 1997, reigning chess World Champion and Grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer program. The same year, speech recognition software developed by Dragon Systems was implemented on Windows. Cynthia Breazeal also developed Kismet, a robot that could recognize and display emotions.

Machine Learning

A computer “learns” when its software is able to successfully predict and react to unfolding scenarios based on previous outcomes. Machine learning refers to the process by which computers develop pattern recognition, or the ability to continuously learn from and make predictions based on data, and can then make adjustments without being specifically programmed to do so. A form of artificial intelligence, machine learning effectively automates the process of analytical model-building and allows machines to adapt to new scenarios independently.

The four steps for building a machine learning model are (a code sketch of these steps follows the list):
1. Select and prepare a training data set necessary for solving the problem. This data can be labelled or unlabelled.
2. Choose an algorithm to run on the training data.

  • If the data is labelled, the algorithm could be regression, decision trees, or instance-based.
  • If the data is unlabelled, the algorithm could be a clustering algorithm, an association algorithm, or a neural network.

3. Train the algorithm to create the model.
4. Use and improve the model.
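
Here is a minimal sketch of the four steps in Python, assuming scikit-learn and its built-in iris data set; the algorithm choice and parameters are illustrative only:

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Step 1: select and prepare a (labelled) training data set.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Step 2: choose an algorithm suited to labelled data, e.g. a decision tree.
model = DecisionTreeClassifier(max_depth=3)

# Step 3: train the algorithm to create the model.
model.fit(X_train, y_train)

# Step 4: use the model, measure it, and iterate on data or parameters.
predictions = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, predictions))
```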

There are three methods of machine learning. “Supervised” learning works with labelled data and requires less training. “Unsupervised” learning is used to classify unlabelled data by identifying patterns and relationships. “Semi-supervised” learning uses a small labelled data set to guide classification of a larger unlabelled data set.
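
As a concrete illustration of the third method, here is a sketch of semi-supervised learning, assuming scikit-learn’s SelfTrainingClassifier; the data set and the 90%-unlabelled split are invented purely for illustration:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Hide 90% of the labels (-1 marks a sample as unlabelled for scikit-learn).
rng = np.random.RandomState(0)
y_partial = y.copy()
y_partial[rng.rand(len(y)) < 0.9] = -1

# The small labelled portion guides classification of the unlabelled rest.
model = SelfTrainingClassifier(DecisionTreeClassifier(max_depth=3))
model.fit(X, y_partial)
print("accuracy against the full labels:", model.score(X, y))
```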

Deep Learning

Deep learning is a subset of machine learning that has demonstrated significantly superior performance to some traditional machine learning approaches. Deep learning utilizes a combination of multi-layer artificial neural networks and data- and compute-intensive training, inspired by our latest understanding of human brain behaviour. This approach has become so effective that it has even begun to surpass human abilities in many areas, such as image and speech recognition and natural language processing.

Deep learning models process large amounts of data and are typically unsupervised or semi-supervised.
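
To make “multi-layer” concrete, here is a minimal sketch of a small neural network, assuming scikit-learn’s MLPClassifier on its built-in digits data set; the layer sizes and iteration count are illustrative, not tuned:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of 64 units each: a (shallow) multi-layer network.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```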

Modern applications for AI

AI has the unique ability to extract meaning from data when you can define what the answer looks like but not how to get there. AI can amplify human capabilities and turn exponentially growing data into insight, action, and value.

Today, AI is used in a variety of applications across industries, including healthcare, manufacturing, and government. Here are a few specific use cases:

  • Prescriptive maintenance and quality control improve production, manufacturing, and retail through an open framework for IT/OT. Integrated solutions prescribe the best maintenance decisions, automate actions, and enhance quality control processes by applying enterprise AI-based computer vision techniques.
  • Speech and language processing transforms unstructured audio data into insight and intelligence. It automates the understanding of spoken and written language using natural language processing, speech-to-text analytics, biometric search, or live call monitoring (a toy text-classification sketch follows this list).
  • Video analytics and surveillance automatically analyses video to detect events, uncover identities, environments, and people, and obtain operational insights. It uses edge-to-core video analytics systems for a wide variety of workloads and operating conditions.
  • Highly autonomous driving is built on a scale-out data ingestion platform that enables developers to build optimal highly autonomous driving solutions tuned for open-source services, machine learning, and deep learning neural networks.
  • AI chatbots: chatbot applications act much like virtual assistants and can take on the workload of many human agents. They are especially helpful in sectors such as restaurants, hospitals, and travel.
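
As a toy illustration of the language-processing use case, here is a sketch of text classification with scikit-learn; the miniature corpus and its labels are invented purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny labelled corpus: 1 = complaint, 0 = routine request.
texts = [
    "my order arrived broken and late",
    "please book a table for two tonight",
    "the room was dirty and the staff rude",
    "what time does the restaurant open",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feed a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["the food was cold and service slow"]))  # likely [1]
```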

The AI Revolution in Science

Big data has met its match. In field after field, the ability to collect data has exploded: in biology, with its burgeoning databases of genomes and proteins; in astronomy, with the petabytes flowing from sky surveys; in social science, tapping millions of posts and tweets that ricochet around the internet. The flood of data can overwhelm human insight and analysis, but the computing advances that helped deliver it have also conjured powerful new tools for making sense of it all.

In a revolution that extends across much of science, researchers are unleashing artificial intelligence (AI), often in the form of artificial neural networks, on the data torrents. Unlike earlier attempts at AI, such “deep learning” systems don’t need to be programmed with a human expert’s knowledge. Instead, they learn on their own, often from large training data sets, until they can see patterns and spot anomalies in data sets that are far larger and messier than human beings can cope with.

Unlike a graduate student or a postdoc, however, neural networks can’t explain their thinking: the computations that lead to an outcome are hidden. So their rise has spawned a field some call “AI neuroscience”: an effort to open up the black box of neural networks, building confidence in the insights that they yield.
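
One simple probe in this spirit, sketched here under the assumption of scikit-learn (the model and data set are illustrative, not from the source), is permutation importance: shuffle each input feature and see how much the trained network’s accuracy suffers:

```python
from sklearn.datasets import load_iris
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X, y)

# Features whose shuffling drops accuracy most are the ones the
# otherwise-opaque model leans on for its predictions.
result = permutation_importance(net, X, y, n_repeats=10, random_state=0)
for name, score in zip(load_iris().feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```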