Amazon Rekognition provides two API sets: Rekognition Image for analyzing images, and Rekognition Video for analyzing videos.
Both APIs perform detection and recognition analysis of images and videos to provide insights you can use in your applications. For example, you could use Rekognition Image to enhance the customer experience for a photo management application. When a customer uploads a photo, your application can use Rekognition Image to detect real-world objects or faces in the image. After your application stores the information returned from Rekognition Image, the user could then query their photo collection for photos with a specific object or face. Deeper querying is possible. For example, the user could query for faces that are smiling or query for faces that are a certain age.
For video applications, you could use Rekognition Video to create a surveillance application. Rekognition Video can track where a person is detected throughout a stored video. Alternatively, you can use Rekognition Video to search a streaming video for persons whose facial descriptions match facial descriptions already stored by Amazon Rekognition.
The Amazon Rekognition API makes deep learning image analysis easy to use. For example, RecognizeCelebrities returns information for up to 100 celebrities detected in an image. This includes information about where celebrity faces are detected in the image and where to get further information about the celebrity.
The following information covers the types of analysis that Amazon Rekognition provides and an overview of Rekognition Image and Rekognition Video operations. Also covered is the difference between non-storage and storage operations.
Types of Detection and Recognition
The following are the types of detection and recognition that the Rekognition Image API and Rekognition Video API can perform.
A label refers to any of the following: objects (for example, flower, tree, or table), events (for example, a wedding, graduation, or birthday party), concepts (for example, a landscape, evening, or nature), or activities (for example, getting out of a car). Amazon Rekognition can detect labels in images and videos. However, activities are not detected in images.
To detect labels in images, use DetectLabels. To detect labels in stored videos, use StartLabelDetection.
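A typical next step after calling DetectLabels is filtering the returned labels by confidence before storing them. The sketch below works on a hand-written dict that mirrors the documented shape of a DetectLabels response (a Labels list of Name/Confidence entries); a real application would get this dict back from the AWS SDK, and the sample values are invented.

```python
# Sample response, hand-written to mirror the documented DetectLabels
# output shape (Labels -> Name, Confidence). Values are illustrative.
sample_response = {
    "Labels": [
        {"Name": "Flower", "Confidence": 99.1},
        {"Name": "Plant", "Confidence": 98.4},
        {"Name": "Table", "Confidence": 54.2},
    ]
}

def labels_above(response, min_confidence):
    """Return label names whose confidence meets the threshold."""
    return [
        label["Name"]
        for label in response["Labels"]
        if label["Confidence"] >= min_confidence
    ]

print(labels_above(sample_response, 90.0))  # ['Flower', 'Plant']
```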
Amazon Rekognition can detect faces in images and stored videos. With Amazon Rekognition you can get information about where faces are detected in an image or video, facial landmarks such as the position of eyes, and detected emotions such as happy or sad. You can also compare a face in an image with faces detected in another image. Information about faces can also be stored for later retrieval.
To detect faces in images, use DetectFaces. To detect faces in stored videos, use StartFaceDetection.
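For each detected face, the response includes a bounding box and a list of emotions with confidence scores. As a minimal sketch, the following picks the strongest emotion from one face entry; the dict is hand-written to follow the documented FaceDetail shape (BoundingBox, Emotions), and the values themselves are made up.

```python
# One face entry, hand-written to follow the documented FaceDetail
# shape (BoundingBox, Emotions). Values are illustrative only.
face_detail = {
    "BoundingBox": {"Width": 0.25, "Height": 0.30, "Left": 0.10, "Top": 0.05},
    "Emotions": [
        {"Type": "HAPPY", "Confidence": 97.3},
        {"Type": "CALM", "Confidence": 2.1},
    ],
}

def top_emotion(face):
    """Return the emotion type with the highest confidence for one face."""
    return max(face["Emotions"], key=lambda e: e["Confidence"])["Type"]

print(top_emotion(face_detail))  # HAPPY
```

An application querying for "smiling" or "happy" faces, as described above, would apply a function like this across every face in its stored results.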
Amazon Rekognition can search for faces. Facial information is indexed into a container known as a collection. Face information in the collection can then be matched with faces detected in images, stored videos, and streaming video. For more information, see Searching Faces in a Collection.
To search for known faces in images, use SearchFacesByImage. To search for known faces in stored videos, use StartFaceSearch. To search for known faces in streaming videos, use CreateStreamProcessor.
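A face search returns candidate matches from the collection, each with a similarity score. The sketch below filters a hand-written dict shaped like a SearchFacesByImage response (FaceMatches with Similarity and a Face containing a FaceId); the IDs and scores are placeholders, not real results.

```python
# Sample response, hand-written to follow the documented
# SearchFacesByImage shape (FaceMatches -> Similarity, Face -> FaceId).
search_response = {
    "FaceMatches": [
        {"Similarity": 99.0, "Face": {"FaceId": "11111111-aaaa"}},
        {"Similarity": 82.5, "Face": {"FaceId": "22222222-bbbb"}},
    ]
}

def matched_face_ids(response, min_similarity=90.0):
    """Return FaceIds from the collection that meet the similarity bar."""
    return [
        match["Face"]["FaceId"]
        for match in response["FaceMatches"]
        if match["Similarity"] >= min_similarity
    ]

print(matched_face_ids(search_response))  # ['11111111-aaaa']
```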
Amazon Rekognition can track persons in a stored video. Rekognition Video provides tracking, face details, and in-frame location information for persons detected in a video. Person tracking is not available for images.
To detect persons in stored videos, use StartPersonTracking.
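Tracking results pair each detection with a timestamp (milliseconds into the video) and a person index that stays stable for the same person across frames. As a sketch of how an application might use this, the following finds the first and last timestamps at which one tracked person appears; the list is hand-written to follow that documented shape, with invented data.

```python
# Sample tracking results, hand-written to follow the documented shape:
# each entry pairs a Timestamp (ms) with a Person.Index that identifies
# the same person across frames. Data is illustrative only.
persons = [
    {"Timestamp": 0,    "Person": {"Index": 0}},
    {"Timestamp": 500,  "Person": {"Index": 0}},
    {"Timestamp": 1000, "Person": {"Index": 1}},
    {"Timestamp": 4000, "Person": {"Index": 0}},
]

def appearance_span_ms(detections, person_index):
    """Return first and last timestamps (ms) at which a person appears."""
    stamps = [
        d["Timestamp"]
        for d in detections
        if d["Person"]["Index"] == person_index
    ]
    return min(stamps), max(stamps)

print(appearance_span_ms(persons, 0))  # (0, 4000)
```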
Amazon Rekognition can recognize thousands of celebrities in images and stored videos. You can get information about where a celebrity’s face is located in an image, facial landmarks, and the pose of a celebrity’s face. You can also get tracking information for celebrities as they appear throughout a stored video.
To recognize celebrities in images, use RecognizeCelebrities. To recognize celebrities in stored videos, use StartCelebrityRecognition.
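As mentioned above, RecognizeCelebrities returns both the recognized names and pointers to further information. This sketch maps each name to its URLs from a hand-written dict following the documented response shape (CelebrityFaces with Name and Urls); the entries are placeholders, not real results.

```python
# Sample response, hand-written to follow the documented
# RecognizeCelebrities shape (CelebrityFaces -> Name, Urls).
recognition = {
    "CelebrityFaces": [
        {"Name": "Example Actor", "Urls": ["www.example.com/actor"]},
        {"Name": "Example Singer", "Urls": []},
    ],
    "UnrecognizedFaces": [],
}

def celebrity_summary(response):
    """Map each recognized celebrity to its further-information URLs."""
    return {c["Name"]: c["Urls"] for c in response["CelebrityFaces"]}

print(celebrity_summary(recognition))
```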
Amazon Rekognition can analyze images and stored videos for explicit or suggestive adult content.
To detect unsafe images, use DetectModerationLabels. To detect unsafe stored videos, use StartContentModeration.
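An application typically turns the moderation labels into a flag/allow decision against its own confidence threshold. The sketch below does this over a hand-written dict shaped like a DetectModerationLabels response (ModerationLabels with Name, ParentName, Confidence); the label and score are illustrative only.

```python
# Sample response, hand-written to follow the documented
# DetectModerationLabels shape (ModerationLabels -> Name, ParentName,
# Confidence). The entry below is illustrative only.
moderation_response = {
    "ModerationLabels": [
        {"Name": "Suggestive", "ParentName": "", "Confidence": 75.0},
    ]
}

def should_flag(response, min_confidence=60.0):
    """True if any moderation label meets the confidence threshold."""
    return any(
        label["Confidence"] >= min_confidence
        for label in response["ModerationLabels"]
    )

print(should_flag(moderation_response))  # True
```

The right threshold depends on the application: a lower value flags more content for human review, a higher value flags less.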