

Deep Learning vs. Machine Learning and the Origins of Artificial Intelligence

By Sabina Bhasin | February 12, 2021

This article was written by Leah Zitter, Ph.D. You can connect with her on the platform @Leah Zitter.


Humans have long imagined machines that can reason, decide, reflect, and predict as we do.

Two world wars passed before the first weak AI appeared. Weak AIs perform a single task or a set of closely related tasks, like Amazon Alexa and Google Search. Strong AIs are systems that strategize, predict, and effortlessly handle complex tasks, like driverless cars. Scientists hope to develop an even more advanced AI with ambitions, emotions, social skills, and a conscience.

To produce these smart machines, scientists train algorithms to cogitate through various ML methods. There are areas, however, where the "brains" of machines simply get stuck. No amount of data, for example, could teach them to extract the more intricate features in sound, images, or text.


But deep learning (DL), an ML method that simulates human neural reasoning and feeds raw data directly into the algorithmic model, can handle these more complex tasks. Think of DL as a more sophisticated subset of ML; so sophisticated, in fact, that MIT is using it to predict our future.


A Brief History of AI


An early forerunner of AI, a computer called ENIAC, filled a 50-foot-long room, weighed more than 60 grizzly bears, and came out six months after World War II ended. The programmable computer excited the press because it performed thousands of calculations per second, but by the 1970s it looked modest next to far headier devices. In the years that followed, the Association for the Advancement of Artificial Intelligence (AAAI) and several AI companies formed, most notably Teknowledge and IntelliCorp.

The first few years of AI were rough. The period of "old-fashioned AI" consisted of primitive translating machines using "machine-aided cognition" for their feats. 

Then, in 1965, American cognitive psychologist Herbert Simon proclaimed that machines would be capable "of doing any work a man can do" within just 20 years. Still, it took longer than that for software programs like Dendral, MYCIN, and XCON to jump-start the industry.

Modern AI's era came in 1998, when two Stanford University graduate students launched a web search engine called Google from a garage in Menlo Park, California. And the rest is history.

As of today, AI has spawned various subfields, each with its own specialty. These include:

  1. Speech recognition, which teaches robots to transcribe spoken language.

  2. Computer vision, which teaches robots to see.

  3. Natural language processing, which trains robots to read and write.

  4. Robotics, which teaches robots to understand, manipulate, and move around their environments.

Today, modern AI is generally defined as "machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention," according to the Brookings Institution.


What Is ML?


Machine learning (ML) is the process of teaching AI to analyze, predict, and advise on possible outcomes.


The most common methods data scientists use are supervised and unsupervised learning, or a blend of both. In supervised learning, researchers feed troves of labeled inputs into a model, teaching its algorithms to recognize specific objects in real life. In unsupervised learning, researchers walk the model through batches of unlabeled data, and the machine begins to identify patterns on its own.
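The contrast between the two styles can be sketched in a few lines of pure Python. This is a minimal, illustrative example (all the points and labels are made up): the "supervised" half classifies using labels a human provided, while the "unsupervised" half runs a few naive k-means iterations over the same points with the labels stripped away.

```python
def centroid(points):
    """Mean of a list of (x, y) points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist2(a, b):
    """Squared distance between two (x, y) points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# --- Supervised: every input arrives with a label. ---
labeled = {
    "cat": [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1)],
    "dog": [(5.0, 5.0), (4.8, 5.2), (5.1, 4.9)],
}
centroids = {label: centroid(pts) for label, pts in labeled.items()}

def classify(point):
    # Predict the label whose examples the point sits closest to.
    return min(centroids, key=lambda label: dist2(point, centroids[label]))

print(classify((1.1, 0.9)))  # a point near the "cat" examples

# --- Unsupervised: the same points, but with no labels at all. ---
unlabeled = [p for pts in labeled.values() for p in pts]
centers = [unlabeled[0], unlabeled[-1]]   # naive initial guesses
for _ in range(5):                        # a few k-means iterations
    groups = [[], []]
    for p in unlabeled:
        nearest = min(range(2), key=lambda i: dist2(p, centers[i]))
        groups[nearest].append(p)
    centers = [centroid(g) for g in groups]

print([len(g) for g in groups])  # the model separates the two clusters itself
```

The key difference sits in the data structures: the supervised half starts from a dict keyed by human-supplied labels, while the unsupervised half only ever sees a flat list of points.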


What Is Deep Learning?


Deep learning (DL) is a subfield of ML in which the machine learns features and tasks directly from the data. (The data can be images, text, or sound.) Deep learning methods are far more complex and expensive than traditional ML but can provide far more accurate results. In contrast to most ML, DL methods use neural network architectures to process their results.


The most common DL neural networks include:

  • Artificial neural networks model how our brains analyze and process information, simulating neurons, synapses, nodes, and neural firing.

  • Convolutional neural networks (ConvNets or CNNs) specialize in processing data that comes in a grid-like form, such as images. Each neuron works in its own receptive field, and together the connected neurons cover the entire visual field.

  • Recurrent neural networks (RNNs) maintain an internal memory by looping outputs back into the network, helping them retain information across a sequence and predict upcoming elements or events.
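The neuron-and-firing idea behind all of these architectures can be sketched in a few lines. This is a toy, hand-weighted example in pure Python, not a trained model: each "neuron" takes a weighted sum of its inputs plus a bias and squashes it through a sigmoid activation, and two hidden neurons feed one output neuron.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through a sigmoid
    # "firing" function that squashes the result into (0, 1).
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def tiny_network(inputs):
    # Two hidden neurons feed one output neuron. The weights here are
    # hand-picked for illustration; training would learn them from data.
    h1 = neuron(inputs, weights=[0.5, -0.4], bias=0.1)
    h2 = neuron(inputs, weights=[-0.3, 0.8], bias=0.0)
    return neuron([h1, h2], weights=[1.0, 1.0], bias=-1.0)

out = tiny_network([1.0, 2.0])
print(0.0 < out < 1.0)  # a sigmoid output always lies strictly between 0 and 1
```

A CNN differs mainly in how the weights are shared across a grid of such neurons, and an RNN in feeding each step's output back in as part of the next step's input.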


Differences Between DL and ML


The ML training process differs significantly from DL's. In ML, a human experiments with different learning styles and models, testing blends of classifiers and features to determine which arrangement works best for the training set. With DL, on the other hand, scientists feed the raw data into the machine, and the machine makes those choices itself.
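That human-in-the-loop feature step is easy to show concretely. Below is an illustrative sketch using a made-up 4x4 "image" (a list of pixel rows): the ML-style path computes features a researcher chose by hand, while the DL-style path hands the raw pixels straight to the model, which would learn its own features.

```python
# A made-up 4x4 grayscale "image": dark on the left, bright on the right.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
pixels = [p for row in image for p in row]

# ML-style: hand-engineered features chosen by the researcher.
features = {
    "mean_brightness": sum(pixels) / len(pixels),
    "left_right_contrast": sum(row[-1] - row[0] for row in image) / len(image),
}
print(features)

# DL-style: no feature step -- the flattened raw pixels themselves are
# the input, and the network learns which features matter.
dl_input = pixels
print(len(dl_input))  # 16 raw values, no human-chosen features
```

In the first path, a bad feature choice caps the model's accuracy no matter how much data arrives; in the second, the model's capacity and the data decide.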


While ML is cheaper, faster, and more straightforward to debug, DL is far more capable and sophisticated. Running on high-performing GPUs and trained on massive batches of labeled data, DL models can pick up the subtle nuances in images that ML programs overlook.


When it comes to applications, ML gives us innovations that include online fraud detection, email spam, malware filtering, and product recommendations. DL's sophistication has led to innovations like self-driving cars, face recognition technology, machine translation, and traffic prediction.


The Future of AI and DL


The future of DL-driven AI resembles our wildest sci-fi plots. 

Futurist Ray Kurzweil has been called "the best person to predict the future of artificial intelligence."

According to Kurzweil, we'll be able to upload our consciousness to computers by the 2030s. Robot intelligence will be billions of times more potent than human intelligence by the 2040s. And by 2045, we will achieve superhuman intelligence by uploading our neocortex to a synthetic neocortex in the cloud.

The limit does not exist. 


Let's Connect!


Leah Zitter, Ph.D., has a master's in philosophy, epistemology, and logic and a Ph.D. in research psychology.

There is so much more to discuss, so connect with us and share your Google Cloud story. You might get featured in the next installment! Get in touch with Content Manager Sabina Bhasin at sabina.bhasin@c2cglobal.com if you're interested.

Rather chat with your peers? Join our C2C Connect chat rooms! 

