Traditional machine learning uses hand-crafted feature extraction and modality-specific algorithms to label images or recognize voices. However, this approach has drawbacks in both time-to-solution and accuracy. Today’s advanced deep neural networks use new algorithms, big data, and the computational power of the GPU to change this dynamic.
Deep learning is essentially an attempt to simulate the senses of sight and hearing, which implies that its applications are as numerous as the specific uses for understanding sounds and images. Here are some use cases compiled from various sources; references are available at the end of this article.
1. Image Tagging – Facebook has millions of labeled photographs with which to train the algorithms that automatically tag friends in uploaded images (see “Facebook Creates Software that Matches Faces Almost as Well as You Do”).
2. Identifying items in real time to help blind and visually impaired people recognize the physical world through a smartphone app (Aipoly, an iPhone app, uses a convolutional neural network trained with Teradeep’s deep learning software on a 10-million-image dataset).
3. Recreating paintings of the great masters – A group of researchers at the University of Tübingen, Germany, has developed an algorithm that can morph an image to resemble a painting in the style of the great masters. (A photograph of apartments by a river in Tübingen was processed to be stylistically similar to various paintings, including J.M.W. Turner’s “The Wreck of a Transport Ship,” Van Gogh’s “The Starry Night,” and Edvard Munch’s “The Scream.”) Read more.
4. Drug discovery and medical imaging-based diagnosis using deep learning (find out how Butterfly Network is building a device that will make medical imaging accessible to everyone in the world, a breakthrough that could save millions of lives).
5. Optical Character Recognition (OCR) – scanning images to extract the text they contain and correlate it with the objects found in the image, an approach that has been gaining traction lately.
6. Identifying emotions from video or still images – Affectiva, an MIT Media Lab spinoff, uses deep learning networks to identify emotions from video or still images.
7. Identifying company brands and logos in photos posted to social media, to help track brand presence at events or locations, compare brand performance with competitors, and target advertising campaigns. Ditto Labs has built a detection system that uses deep learning for this purpose.
• Speech recognition for voice-based search, as in Android, Siri, or Cortana.
• Analyzing conversations: identifying speakers, keywords, critical moments, and time spent talking, and deducing group takeaways from conference calls. For example, Gridspace brings conversational awareness to communications.
• NLP is used by Google Translate on the phone: it recognizes text in images and can translate it to or from 20 languages.
• Processing text from human speech – Natural Language Processing (NLP) is used heavily for language conversion in chat rooms.
• Dango is a floating assistant that runs on your phone and predicts emoji, stickers, and GIFs based on what you and your friends are writing in any app. Suggesting emoji is hard: Dango needs to understand the meaning of what you’re writing in order to suggest emoji you might want to use. See here for the engineering behind Dango.
• Deep neural networks (DNNs) trained on large transcriptional response data sets can classify drugs into therapeutic categories based solely on their transcriptional profiles (here’s a paper on deep learning applications for predicting the pharmacological properties of drugs: http://www.kurzweilai.net/deep-learning-applied-to-drug-discovery-and-repurposing).
• The Institute of Bioinformatics found that deep learning excelled at toxicity prediction, outperforming many other computational approaches such as naive Bayes, support vector machines, and random forests.
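A toy version of such a comparison can be sketched with scikit-learn. Note that the synthetic data below is only a stand-in for real toxicity assay data, and the small network here is far simpler than the models used in that study:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic "compound activity" data: 50 features, binary toxic/non-toxic label.
X, y = make_classification(n_samples=1000, n_features=50,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

models = {
    "naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "random forest": RandomForestClassifier(random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(64, 64),
                                    max_iter=500, random_state=0),
}
# Fit each model and score held-out accuracy, as a benchmark would.
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

On real assay data the relative ranking is what matters; on this toy dataset any of the four may come out ahead.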
• Understanding diseases and genetic therapies: how both natural and therapeutic genetic variation changes cellular processes such as DNA-to-RNA transcription and gene splicing. Deep Genomics applies deep learning to make predictions in these areas.
• Brand voice – using deep learning to help marketers and advertisers identify the right audiences to target on social platforms and to suggest what they should say to customers based on the data (example: Affinio, http://www.affinio.com/blog/deep-learning-disrupting-social-marketing-and-advertising).
• Audience segmentation: segmenting social audiences across multiple platforms, based on unsupervised identification of naturally forming tribes, and extracting the unique features of those tribes.
• Content recommendations: once specific audience segments have been identified, recommending content (topics, phrases, wording, images, videos, etc.) that brands should create, based on what is being discussed, shared, published, and reviewed within these communities.
• Ad targeting: using the combination of natural segmentation and content recommendation to suggest whom to target, as well as which keywords, images, and other elements to use in ad design.
• Sponsorships: deep learning can be applied to measure exactly how often a brand appears during the telecast of a sports event.
i). Sentiment Analysis and Opinion Mining – Sentiment analysis, also called opinion mining, is the field of study that analyzes people’s opinions, sentiments, evaluations, appraisals, attitudes, and emotions towards entities such as products, services, organizations, individuals, issues, events, topics, and their attributes. It represents a large problem space. Here’s an interesting paper on this topic.
ii). Automatic music recommendation, which has become an increasingly relevant problem as more music is sold and consumed digitally, can also be addressed through deep learning.
iii). Subjectivity detection, word sentiment classification, document sentiment classification, and opinion extraction.
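A minimal sketch of the document sentiment classification task: a bag-of-words representation feeding a classifier. For brevity a linear model from scikit-learn stands in where production systems would use the deep networks discussed here, and the tiny corpus is invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny labeled corpus (1 = positive, 0 = negative); real systems train on
# many thousands of documents.
docs = [
    "an excellent and moving film", "a wonderful movie truly great",
    "great acting and a great story", "loved every minute of it",
    "a terrible waste of time", "boring plot and awful acting",
    "hated this dull movie", "truly awful do not watch",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(docs)            # word-count features
clf = LogisticRegression().fit(X, labels)

# Classify unseen snippets built from words the model has seen.
new_docs = ["wonderful great excellent", "awful boring terrible"]
pred = clf.predict(vec.transform(new_docs))
print(pred)  # positive sentiment for the first, negative for the second
```

Deep approaches replace the count vectors with learned word embeddings, which is what lets them generalize to words and phrasings never seen in training.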
Artificial intelligence is learning to play Atari games. The Atari addict is a deep-learning algorithm called DQN (Deep Q-Network), and this general-purpose learning algorithm could be the first rung on a ladder to artificial intelligence.
• Continuous evolution of methods is a prime characteristic of internet fraud. Deep learning algorithms are able to analyze potentially tens of thousands of latent features (time signals, actors, and geographic location are some easy examples) that might make up a particular type of fraud, and are even able to detect “sub modus operandi,” or different variants of the same scheme.
• Today’s car crash-avoidance systems and experimental driverless cars rely on radar and other sensors to detect pedestrians on the road. A vision-based safety system has remained elusive because computers typically face a tradeoff between analyzing video images quickly and drawing the right conclusions. A pedestrian detection system based on deep neural networks can perform close to real time on visual cues alone and brings significant improvement over the existing approach.
• By avoiding hard-coded detection of specific features (such as lane markings, guardrails, or other cars) and the near-infinite number of “if, then, else” statements that would be impractical to write when accounting for the randomness that occurs on the road, DAVE2 created a robust system for driving on public roads.
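The vision systems above rest on convolution: small filters slid across an image to produce feature maps. A minimal NumPy sketch, with a hand-written vertical-edge filter standing in for the many filters a network would learn from data:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny "image" with a vertical edge between columns 2 and 3.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

# A vertical-edge filter; in a CNN such kernels are learned, not hand-coded.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

fmap = conv2d(image, kernel)
print(fmap)  # responds strongly (value 2) exactly where the edge is
```

A convolutional network stacks many such learned filters with nonlinearities, so edge-like responses in early layers combine into pedestrian- or lane-like responses in later ones.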
Malware has proven increasingly difficult to detect via signature- or heuristic-based methods, which means most antivirus (AV) programs are woefully ineffective against mutating malware, and especially ineffective against APT (Advanced Persistent Threat) attacks. Typical malware consists of about 10,000 lines of code, and changing only 1% of that code renders most AV ineffective. Deep Instinct applies deep learning algorithms to detect structures and program functions that are indicative of malware. Read more.
Building your own use cases: Nervana provides a deep learning framework called neon that lets you build your own use cases around deep learning networks, plus a cloud service for importing and analyzing data using deep learning models.
The integration of sight and sound will affect the following fields among a myriad of others:
1. Medical applications – there have been tremendous advances in robotic surgery, which relies on extremely sensitive tactile equipment. However, if a doctor can tell a robot to “move a fraction of a millimeter to the left of the clavicle,” they could potentially gain more control by directing the robot through fully understood voice commands.
2. Automotive – we are already seeing self-driving cars; deep learning will likely be integrated into automated driving systems to detect and interpret sights and sounds that might be beyond the capacity of humans.
3. Military – drones are particularly well suited to deep learning.
4. Surveillance – here too drones will play a role, but the idea of computers that are able to sense and interpret with a human-like degree of accuracy will change the way in which surveillance is done.
Given the way deep learning is solving longstanding human problems, by visualizing and grasping the things around us and reading them in context, it should be possible to apply these ideas to many more areas in the days to come. With sufficient deep learning capability, problems like the following could be addressed in the near future:
1. Conversational cues: you are part of a discussion on a topic you are not very familiar with. You can turn on a smartphone app powered by deep learning, and it will listen to the conversation and suggest ideas (or even facts) that you can contribute to the discussion.
2. Suggesting actions based on mood – based on your recent conversations, messages, the time of day, and recent developments in your life, your app might know you well enough to suggest real-life actions, such as watching a movie, working out, or calling a friend, much like a close friend would.
So deep learning, working with other algorithms, can help you classify, cluster, and predict. It does so by learning to read the signals, or structure, in data automatically. When deep learning algorithms train, they make guesses about the data, measure the error of their guesses against the training set, and then adjust how they make guesses in order to become more accurate. This is optimization.
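That guess / measure / correct loop can be shown in miniature. Here is a NumPy sketch that fits a line to noisy data by gradient descent, the simplest case of the optimization just described (a real network would have many layers of such parameters, trained the same way):

```python
import numpy as np

# Noisy data generated from a known line: true w = 3.0, b = 0.5.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.05, 200)

w, b, lr = 0.0, 0.0, 0.1
for step in range(500):
    pred = w * x + b                   # make a guess
    err = pred - y                     # measure the error against the data
    w -= lr * (2 * err * x).mean()     # correct the guess (gradient step)
    b -= lr * (2 * err).mean()

print(round(w, 2), round(b, 2))        # recovers roughly w = 3.0, b = 0.5
```

Each pass through the loop reduces the squared error a little; "learning" here is nothing more than repeating this correction until the guesses stop improving.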
Now imagine that, with deep learning, you can classify, cluster, or predict anything you have data about: images, video, sound, text, DNA, and time series (touch, stock markets, economic tables, the weather). That is, anything that humans can sense and that our technology can digitize. With deep learning, we are basically giving society the ability to behave much more intelligently, by accurately interpreting what’s happening in the world around us with software.