Deep learning gives us the power to interpret, classify, cluster, or predict anything that humans can sense and that our technology can digitize. In effect, it lets software accurately interpret what is happening in the world around us, giving society the ability to behave far more intelligently. Here are some Kaggle challenges where you can use deep learning to derive meaning from images, video, sound, text, and more.
The goal of this competition is to take an image of a single handwritten digit and determine what that digit is. The data were taken from the MNIST ("Modified National Institute of Standards and Technology") dataset, a classic within the machine learning community that has been studied extensively. More detail about the dataset, including machine learning algorithms that have been tried on it and their levels of success, can be found on the competition page.
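To make the task concrete, here is a minimal sketch of one classic baseline for this kind of problem: softmax (multinomial logistic) regression over flattened pixel values. The data below are synthetic stand-ins for the real 28x28 MNIST images, and all names and hyperparameters are illustrative assumptions, not the competition's reference solution.

```python
import numpy as np

# Hypothetical sketch: a softmax classifier on synthetic 28x28 "digit"
# images standing in for the real MNIST data.
rng = np.random.default_rng(0)
n, d, k = 200, 28 * 28, 10           # samples, pixels, digit classes
X = rng.random((n, d))               # fake flattened images in [0, 1)
y = rng.integers(0, k, size=n)       # fake digit labels 0-9

W = np.zeros((d, k))                 # weight matrix, one column per digit
lr = 0.5                             # learning rate (illustrative)
for _ in range(100):                 # plain batch gradient descent
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)             # softmax probabilities
    onehot = np.eye(k)[y]
    W -= lr * X.T @ (p - onehot) / n              # cross-entropy gradient

pred = (X @ W).argmax(axis=1)        # predicted digit for each image
print("training accuracy:", (pred == y).mean())
```

Competitive entries go far beyond this linear model (convolutional networks dominate the leaderboard), but the input/output shape of the problem is exactly what this sketch shows: pixels in, one of ten digit classes out.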
With fewer than 500 North Atlantic right whales left in the world’s oceans, knowing the health and status of each whale is integral to the efforts of researchers working to protect the species from extinction.
To track and monitor the population, right whales are photographed during aerial surveys and then manually matched against an online photo-identification catalog. Customized software (DIGITS) has been developed to aid this process, but potential matches still require manual inspection, which delays the incorporation of new images into the database.
This competition challenges you to automate the right whale recognition process using a dataset of aerial photographs of individual whales. Automating the identification of right whales would allow researchers to better focus on their conservation efforts.
You will never need a remote control or a light switch again. Lying in bed in the dark, you will point to the ceiling to turn on the light, wave your hand to increase the temperature, or make a T with your hands to turn on the TV set. You and your loved ones will feel safer at home, in parking lots, and in airports: nobody will be watching, but computers will detect distressed people and suspicious activities. Computers will teach you how to use gestures effectively to enhance speech, to communicate with people who do not speak your language, and to speak with deaf people, and you will easily learn other sign languages used to communicate under water, to referee sports, and so on. All of that thanks to gesture recognition!
Currently, there are no realistic, affordable, or low-risk options for neurologically disabled patients to directly control external prosthetics with their brain activity.
Recorded from the human scalp, EEG signals are evoked by brain activity. The relationship between brain activity and EEG signals is complex and poorly understood outside of specific laboratory tests. Providing affordable, low-risk, non-invasive brain-computer interface (BCI) devices depends on further advances in interpreting EEG signals.
This competition challenges you to identify when a hand is grasping, lifting, and replacing an object, using EEG data recorded from healthy subjects as they performed these activities.
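One simple way to think about this kind of event detection is sliding-window analysis of the signal. The sketch below is purely illustrative: it flags a high-energy burst in one synthetic channel, whereas real entries work with many channels and learned classifiers; the sampling rate, window length, and threshold are all assumptions.

```python
import numpy as np

# Illustrative sketch: detecting a motor "event" in a single synthetic
# EEG channel via sliding-window energy. All parameters are made up.
rng = np.random.default_rng(1)
fs = 500                               # assumed sampling rate in Hz
signal = rng.normal(0, 1.0, 5 * fs)    # 5 s of background activity
signal[2 * fs:3 * fs] += rng.normal(0, 3.0, fs)  # burst = the "event"

win = fs // 2                          # 0.5 s analysis windows
energies = np.array([signal[i:i + win].var()
                     for i in range(0, len(signal) - win, win)])
threshold = 2 * np.median(energies)    # crude baseline-relative threshold
event_windows = np.flatnonzero(energies > threshold)
print("windows flagged as event:", event_windows)
```

A real solution would replace the variance threshold with a classifier trained on labeled grasp-lift-replace segments, but the windowed structure of the problem is the same.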
Identify which of 87 classes of birds and amphibians are present in 1,000 continuous wild sound recordings.
The Neural Information Processing Scaled for Bioacoustics (NIPS4B) bird song competition asks participants to identify which of 87 sound classes of birds and their ecosystem are present in 1000 continuous wild recordings from different places in Provence, France. The data is provided by the BIOTOPE society, which maintains the largest collection of wild recordings of birds in Europe.
As humans think, we produce brain waves, and these brain waves can be mapped to actual intentions. In this competition, you are given the brain-wave data of people attempting to spell a word by paying attention only to visual stimuli. The goal is to detect errors made during the spelling task, given the subject's brain waves.
This getting-started competition provides a benchmark data set and an R tutorial to get you going on analysing face images.
The objective of this task is to predict keypoint positions on face images. This can be used as a building block in several applications, such as:
Tracking faces in images and video
Analysing facial expressions
Detecting dysmorphic facial signs for medical diagnosis
Biometrics / face recognition
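Unlike digit classification, keypoint detection is a regression problem: each image maps to a vector of (x, y) coordinates. The sketch below frames it as plain least-squares regression on synthetic stand-in data; the image size matches the competition's 96x96 faces, but everything else (keypoint count, data values) is an illustrative assumption.

```python
import numpy as np

# Hypothetical sketch: keypoint detection as regression from pixels to
# coordinates, fit by least squares on synthetic stand-in data.
rng = np.random.default_rng(2)
n, d = 100, 96 * 96                    # images, pixels (96x96 faces)
k = 15 * 2                             # assume 15 keypoints, x and y each
X = rng.random((n, d))                 # fake flattened grayscale images
true_W = rng.normal(0, 0.1, (d, k))
Y = X @ true_W + rng.normal(0, 0.01, (n, k))   # fake keypoint targets

W, *_ = np.linalg.lstsq(X, Y, rcond=None)      # least-squares fit
pred = X @ W                           # predicted (x, y) coordinates
rmse = np.sqrt(((pred - Y) ** 2).mean())
print("training RMSE:", rmse)
```

The R tutorial that accompanies the competition builds up from a similarly simple baseline; deep learning approaches replace the linear map with a convolutional network but keep the same pixels-in, coordinates-out formulation.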
This competition asks participants to identify which of 35 species of birds are present in continuous recordings taken at three different locations. The data is provided by the Muséum national d’Histoire naturelle, one of the most respected bird survey institutions in the world.
When developing new medicines, it is important to identify molecules that are highly active toward their intended targets but not toward other targets that might cause side effects. The objective of this competition is to identify the best statistical techniques for predicting the biological activities of different molecules, both on- and off-target, given numerical descriptors generated from their chemical structures.
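Because the inputs here are already numerical descriptor vectors, the task reduces to regularized regression. The sketch below uses closed-form ridge regression on synthetic placeholder data; the descriptor count, penalty strength, and activity values are assumptions, not real chemistry.

```python
import numpy as np

# Hedged sketch: predicting a molecule's activity from numerical
# descriptors with ridge regression, on synthetic placeholder data.
rng = np.random.default_rng(3)
n, d = 300, 50                         # molecules, descriptors per molecule
X = rng.normal(size=(n, d))            # fake descriptor matrix
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(0, 0.1, n) # fake measured activities

lam = 1.0                              # ridge penalty strength (assumed)
# Closed-form ridge solution: w = (X^T X + lam*I)^(-1) X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
r2 = 1 - ((y - X @ w) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("training R^2:", round(r2, 3))
```

The ridge penalty matters because descriptor sets are typically large and highly correlated; winning entries in this competition famously used deep neural networks, but the regression framing is identical.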