Iken Personics' flagship product, Mooga, brings an entirely new business paradigm through holistic personalization, one that will soon make existing technologies redundant.
Mooga is a state-of-the-art customer analytics platform that provides real-time, contextual, operational and automated intelligence. It combines hybrid AI (artificial intelligence) with other filtering techniques to deliver a superior end-user experience.
Our platform is designed around loosely coupled services, so it integrates with and quickly adapts to your business needs, enabling automated, intelligent recommendations.
Audio Analytics comprises Audio Fingerprinting and Audio Similarity. Audio Fingerprinting analyzes the audio signal in terms of its perceptual characteristics, or features. These characteristics typically relate to the instrumentation, rhythmic structure and harmonic content of the music.
This is done through feature extraction: the process of computing a compact numerical representation that characterizes a segment of audio. The result is a digital summary of the clip that identifies and captures its basic texture.
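As a rough illustration of feature extraction (Mooga's actual features are not published, so the framing, band count and log-energy summary below are assumptions), a clip can be framed, transformed to the frequency domain, and reduced to a short vector of per-band log energies:

```python
import numpy as np

def fingerprint(signal, frame_size=1024, n_bands=16):
    """Compact numerical summary of an audio segment.

    Splits the signal into fixed-size frames, takes each frame's
    magnitude spectrum, and averages the log energy in a few coarse
    frequency bands -- a simplified stand-in for the perceptual
    features (timbre, harmonic content) described above.
    """
    n_frames = len(signal) // frame_size
    frames = signal[: n_frames * frame_size].reshape(n_frames, frame_size)
    spectra = np.abs(np.fft.rfft(frames, axis=1))      # per-frame spectrum
    bands = np.array_split(spectra, n_bands, axis=1)   # coarse frequency bands
    band_energy = np.stack([b.mean(axis=1) for b in bands], axis=1)
    return np.log1p(band_energy).mean(axis=0)          # one vector per clip

# Example: one second of a 440 Hz tone at 8 kHz -> a 16-number summary.
sr = 8000
t = np.arange(sr) / sr
clip = np.sin(2 * np.pi * 440 * t)
fp = fingerprint(clip)
```

In practice such a vector would be far richer (e.g. mel-scaled bands, temporal statistics), but the idea is the same: a short, comparable numerical signature per clip.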
Machine learning algorithms then operate on each song's digital summary to calculate similarity scores and fetch songs that are acoustically similar to a given query song.
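A minimal sketch of that retrieval step, assuming fingerprints are fixed-length vectors and using cosine similarity as one plausible similarity measure (the source does not specify the metric):

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two fingerprint vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query_fp, library, top_k=2):
    """Rank a library of fingerprints by similarity to a query song."""
    scored = [(name, cosine_similarity(query_fp, fp))
              for name, fp in library.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

# Toy library of hand-made fingerprints (illustrative values only).
library = {
    "song_a": np.array([1.0, 0.0, 0.2]),
    "song_b": np.array([0.9, 0.1, 0.3]),
    "song_c": np.array([0.0, 1.0, 0.0]),
}
ranked = most_similar(np.array([1.0, 0.05, 0.25]), library)
```

A production system would use an approximate nearest-neighbour index rather than a linear scan, but the ranking principle is identical.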
Song Similarity: Similar songs are identified by detecting similar acoustic patterns using state-of-the-art machine learning algorithms and filtering techniques. This surfaces songs that share instrumentation or similar-sounding dominant instruments such as lead vocals, saxophones and violins.
Genre Classification: Based on the texture of the song, which is primarily a function of beat, timbre and intensity, songs are classified into genres such as Pop, Rock and Classical. This classification combines primary research with machine learning.
Artist Similarity: Artists similar to a given artist are determined from the vocals and acoustic patterns in their songs. This presents the user with other artists and songs similar to the ones they generally listen to.
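A hedged sketch of how per-song fingerprints could roll up to artist-level similarity, assuming (this is my assumption, not a stated design) that an artist is summarized by the mean of their songs' fingerprints:

```python
import numpy as np

def artist_profile(song_fps):
    """Summarize an artist as the mean of their songs' fingerprints."""
    return np.mean(song_fps, axis=0)

def similar_artists(target_profile, profiles, top_k=1):
    """Rank known artists by distance to a target profile."""
    ranked = sorted(profiles,
                    key=lambda a: np.linalg.norm(profiles[a] - target_profile))
    return ranked[:top_k]

# Two toy artists with hand-made two-dimensional song fingerprints.
profiles = {
    "artist_x": artist_profile([np.array([0.9, 0.1]), np.array([0.8, 0.2])]),
    "artist_y": artist_profile([np.array([0.1, 0.9]), np.array([0.2, 0.8])]),
}
match = similar_artists(np.array([0.85, 0.15]), profiles)[0]
```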
Music & Speech Classification: Using advanced machine learning algorithms over acoustic features and characteristics, audio streams are segmented into music and speech (dialogues, name tunes, pranks, etc.) with high accuracy.
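To make the segmentation idea concrete, here is a deliberately minimal frame classifier using a single acoustic feature, the zero-crossing rate (tonal music tends to cross zero regularly and slowly; speech-like or noisy audio crosses far more often). The real system combines many features and a trained model; the threshold here is an illustrative assumption:

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent samples where the signal changes sign."""
    return float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))

def segment(signal, frame_size=1000, zcr_threshold=0.3):
    """Label each frame 'music' or 'speech' by zero-crossing rate."""
    n_frames = len(signal) // frame_size
    labels = []
    for i in range(n_frames):
        frame = signal[i * frame_size:(i + 1) * frame_size]
        kind = "speech" if zero_crossing_rate(frame) > zcr_threshold else "music"
        labels.append(kind)
    return labels

rng = np.random.default_rng(0)
sr = 8000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # tonal: low ZCR
noise = rng.standard_normal(sr)                      # noisy speech proxy: high ZCR
labels = segment(np.concatenate([tone, noise]))
```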
Playlist Tagging: Playlists are generated automatically by tagging similar songs based on a user's favorites, mood or time of day. This reduces the effort required to create 'good' playlists and provides an interesting new way of listening to the contents of a media library.
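One plausible mechanics for favorites-driven playlist generation (my sketch, not the documented algorithm): score every non-favorite song by its distance to the nearest favorite, then take the closest few.

```python
import numpy as np

def auto_playlist(favorites, library, size=3):
    """Build a playlist from songs acoustically close to any favorite.

    `library` maps song name -> fingerprint vector; `favorites` is a
    set of song names already in the library.
    """
    scores = {}
    for name, fp in library.items():
        if name in favorites:
            continue
        # Distance to the closest favorite; smaller = better candidate.
        scores[name] = min(np.linalg.norm(fp - library[f]) for f in favorites)
    return sorted(scores, key=scores.get)[:size]

library = {
    "fav":   np.array([1.0, 0.0]),
    "close": np.array([0.9, 0.1]),
    "mid":   np.array([0.5, 0.5]),
    "far":   np.array([0.0, 1.0]),
}
playlist = auto_playlist({"fav"}, library, size=2)
```

Mood- or time-based playlists would swap the seed set for mood- or time-tagged songs, with the same neighbourhood search underneath.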
Metadata Enrichment: The more descriptive information attached to a song, the faster it can be discovered. With state-of-the-art audio analysis, metadata such as genre is identified, acoustic features are classified into explicit labels, and songs are tagged with those labels.
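The feature-to-label step can be sketched as simple thresholding of continuous acoustic features into searchable text tags. The feature names and thresholds below are illustrative assumptions, not Mooga's actual label vocabulary:

```python
import numpy as np

def enrich_metadata(fp, feature_names, threshold=0.5):
    """Turn continuous acoustic features into explicit, searchable labels.

    Each feature value above `threshold` becomes a 'high_<name>' tag,
    otherwise a 'low_<name>' tag (illustrative labeling scheme).
    """
    labels = []
    for name, value in zip(feature_names, fp):
        labels.append(f"high_{name}" if value > threshold else f"low_{name}")
    return labels

tags = enrich_metadata(np.array([0.8, 0.3]), ["energy", "brightness"])
```

Once attached, such tags let ordinary text search and filtering surface songs by acoustic character rather than manually entered metadata alone.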