Audio fingerprinting is fun. Back in 2018, while working on one of my side projects (Nadyy), I stumbled upon a nice library for audio fingerprinting (dejavu). I liked how fairly easy it was to understand for such a not-so-easy task, and its accuracy was good enough for me. So I decided to build on top of dejavu.
Ultimately, my objective was to build a mobile app that detects and gives information about user audio recordings, just like what the Shazam app does, for example. The tricky part is that audio fingerprinting consumes resources, so fingerprinting user-submitted queries (their audio recordings) on the server side is perhaps not a wise decision. Moreover, modern mobile phones are now rocket fast in terms of audio analysis. The dejavu audio fingerprinting algorithm was written in Python, so I had to port it to whatever language/technology works on both Android and iOS. I went ahead and wrote the port in C++ (open sourced).
The rest of this post is a long tutorial that wraps up the steps necessary (tested on Ubuntu Linux) to include this C++ audio fingerprinting function in a React Native project. I've also open sourced a sample project with all of these steps already done, along with a suggested method to record raw audio.
Disclaimer: I prepared this tutorial for someone who asked for it. Eventually, I decided to publish it publicly instead of letting it stay buried on my PC.
First, create a new React Native project:
npx react-native init mobileAfp
Make sure the Android NDK is set up; it is typically found under /Android/Sdk/ndk-bundle in your home folder (on Linux). Then create a Java class (we will call it RecAndFp) inside the /src/main/java/com/mobileafp/ folder. The implementation should follow React Native standards (please check the sample code or Facebook's native modules tutorial for more info). We will have three main functions that we should be able to call from JS:
public void start_recording()
public void fp(Promise promise)
public void stop_recording()
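For reference, here is a minimal sketch of what that module could look like, assuming the standard React Native native module API (the method bodies are intentionally left as stubs; check the sample code for the real implementation):

package com.mobileafp;

import com.facebook.react.bridge.Promise;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
import com.facebook.react.bridge.ReactMethod;

public class RecAndFp extends ReactContextBaseJavaModule {
    RecAndFp(ReactApplicationContext context) {
        super(context);
    }

    @Override
    public String getName() {
        // Name the module is reachable under from JS: NativeModules.RecAndFp
        return "RecAndFp";
    }

    @ReactMethod
    public void start_recording() {
        // Start capturing raw audio samples.
    }

    @ReactMethod
    public void stop_recording() {
        // Stop capturing and keep the samples for fingerprinting.
    }

    @ReactMethod
    public void fp(Promise promise) {
        // Fingerprint the recorded audio and resolve the promise with the result string.
    }
}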
Next, create another Java class alongside RecAndFp.java (we will call it RecAndFpPackage). This class should implement the createNativeModules method and register our first Java class. Then open MainApplication.java and register the new package inside the getPackages() method.
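Here is roughly what the package class could look like, again assuming the standard React Native API (a sketch, not the exact sample code):

package com.mobileafp;

import com.facebook.react.ReactPackage;
import com.facebook.react.bridge.NativeModule;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.uimanager.ViewManager;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RecAndFpPackage implements ReactPackage {
    @Override
    public List<NativeModule> createNativeModules(ReactApplicationContext reactContext) {
        // Register our RecAndFp module so it becomes visible to React Native.
        List<NativeModule> modules = new ArrayList<>();
        modules.add(new RecAndFp(reactContext));
        return modules;
    }

    @Override
    public List<ViewManager> createViewManagers(ReactApplicationContext reactContext) {
        // This module exposes no custom views.
        return Collections.emptyList();
    }
}

Registering it inside getPackages() in MainApplication.java then comes down to adding packages.add(new RecAndFpPackage()); to the returned list.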
Now you can implement the start_recording() function and test it out. You can call it from JS like so:
import {NativeModules} from "react-native";
NativeModules.RecAndFp.start_recording();
NativeModules.RecAndFp.stop_recording();
const result_string = await NativeModules.RecAndFp.fp();
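As for the recording itself, the sample code ships its own implementation; if you want a starting point, here is one way the two recording stubs could be fleshed out inside RecAndFp, wrapping Android's AudioRecord (a sketch only: the 44.1 kHz mono float format is my assumption, the RECORD_AUDIO permission must be granted, and float reads require API level 23+):

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

private static final int SAMPLE_RATE = 44100; // assumed rate, adjust to your fingerprinter
private AudioRecord recorder;
private Thread recordingThread;
private volatile boolean isRecording;

public void start_recording() {
    // getMinBufferSize reports bytes; we read 4-byte floats below.
    final int bufferBytes = AudioRecord.getMinBufferSize(SAMPLE_RATE,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_FLOAT);
    recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_FLOAT, bufferBytes);
    recorder.startRecording();
    isRecording = true;
    recordingThread = new Thread(() -> {
        float[] buffer = new float[bufferBytes / 4];
        while (isRecording) {
            int read = recorder.read(buffer, 0, buffer.length, AudioRecord.READ_BLOCKING);
            // Append the first `read` samples of `buffer` to an in-memory
            // store, to be fingerprinted later by fp().
        }
    });
    recordingThread.start();
}

public void stop_recording() {
    isRecording = false;
    recorder.stop();
    recorder.release();
}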
Next, create a folder named cpp inside src/main/, and add our C++ source file fingerprint.cpp there. Again, it will be at src/main/cpp/fingerprint.cpp.
Create a CMakeLists.txt file inside the /cpp folder. If you follow Google's guide, Android Studio can create it for you; however, I'm providing one in the sample code. Then edit android/app/build.gradle and include the specific instructions inside the correct blocks (file provided in the sample code; adjust paths accordingly if you run into problems). To pass audio data from Java to C++, declare a native method on the Java side:
public native String passingDataToJni(float[] audio_data, int array_length);
and implement its counterpart in fingerprint.cpp:
extern "C" JNIEXPORT jstring JNICALL
Java_com_mobileafp_RecAndFp_passingDataToJni(JNIEnv *env, jobject instance, jfloatArray floatarray, jint intValue) {}
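To make the flow concrete, here is a sketch of how fp() might hand the recorded samples to that native method and resolve the promise (recordedSamples is a hypothetical field filled while recording; it is not from the sample code):

@ReactMethod
public void fp(Promise promise) {
    try {
        // recordedSamples: hypothetical float[] filled by the recording thread.
        String fingerprints = passingDataToJni(recordedSamples, recordedSamples.length);
        promise.resolve(fingerprints);
    } catch (Exception e) {
        promise.reject("fp_error", e);
    }
}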
The RecAndFp.java file should also be edited to load the C++ library:
static {
System.loadLibrary("fingerprint");
}
Now let's set up OpenCV, which the C++ port depends on. Download the OpenCV Android SDK, then create a module folder android/opencv. Copy the path_to_opencv_sdk/sdk/java/src directory into path_to_your_project/opencv/src/main/java. Create an aidl/org/opencv/engine folder and move /java/org/opencv/engine/OpenCVEngineInterface.aidl into it. Copy path_to_opencv_sdk/sdk/java/res into path_to_your_project/opencv/src/main/res. Create path_to_your_project/opencv/src/ and copy the path_to_opencv_sdk/sdk/native folder into it. Then create a CMakeLists.txt in the opencv/ folder and add the following:
cmake_minimum_required(VERSION 3.4.1)
set(OpenCV_DIR "src/sdk/native/jni")
find_package(OpenCV REQUIRED)
message(STATUS "OpenCV libraries: ${OpenCV_LIBS}")
include_directories(${OpenCV_INCLUDE_DIRS})
Create an opencv/build.gradle file and add the instructions to the specific blocks (file provided in the sample code). Also, in the same build file, make sure your target and min SDKs match your project's SDK versions:
minSdkVersion rootProject.ext.minSdkVersion
targetSdkVersion rootProject.ext.targetSdkVersion
Next, edit the android/cpp/CMakeLists.txt file we created earlier so it includes OpenCV:
set(OpenCV_DIR "../opencv/src/sdk/native/jni")
find_package(OpenCV REQUIRED)
message(STATUS "OpenCV libraries: ${OpenCV_LIBS}")
target_link_libraries(YOUR_TARGET_LIB ${OpenCV_LIBS})
Finally, add the OpenCV module as a dependency in the app/build.gradle file, under the dependencies block (and make sure the module is also included in settings.gradle):
implementation project(':opencv')
That's it for Android. For iOS, run pod install inside the ios folder, and then open mobileAfp.xcworkspace with Xcode to start working. In addition, I strongly recommend updating Xcode and iOS to the latest versions; at the time of preparing this tutorial, those were 11.3 for Xcode and 13.3 for iOS. Also, make sure you have the record audio permission in your plist file. Create an Objective-C file (RecAndFp.m) inside the ios root folder. Use the Xcode interface to avoid potential linking problems: go to File -> New -> choose Objective-C File, and set the default group so the file lands in the ios root folder. The implementation should follow React Native standards (check the sample code or Facebook's tutorial for more info). We will export three methods like the following:
#import "React/RCTBridgeModule.h"
@interface RCT_EXTERN_MODULE(RecAndFp, NSObject)
RCT_EXTERN_METHOD(start_recording)
RCT_EXTERN_METHOD(stop_recording)
RCT_EXTERN_METHOD(fp:(RCTPromiseResolveBlock)resolve rejecter:(RCTPromiseRejectBlock)reject)
@end
Next, create RecAndFp.swift. In Xcode go to File -> New -> choose Swift File, then choose the default group. This file should contain our implementation of the three functions listed above (note: for now, you may want to test the linking using just print messages). A bridging header named mobileAfp-Bridging-Header.h should be created; you should be prompted by Xcode to do so. The name here is critical. The bridging header should include the following for now:
#import <Foundation/Foundation.h>
#import <React/RCTBridgeModule.h>
Now implement the start_recording() function and test it out. Next, create an Objective-C++ class in the ios root folder; call it fingerprint.mm (notice the double-m extension here). In Xcode go to File -> New -> choose Objective-C File and choose the default group; don't forget to double-check the file name. We also need a header for fingerprint.mm: go to File -> New -> choose Header File and name it fingerprint.h. Then include it inside fingerprint.mm:
#import "fingerprint.h"
Next, we need OpenCV on the iOS side. In Xcode, add a framework to your target, browse to the downloaded opencv2.framework, and add it. This should automatically copy the framework file into your project directory. Also add AssetsLibrary.framework, and make sure it is included first, before opencv (notice the order). Finally, add $(PROJECT_DIR)/ as a framework path under your target settings: go to your target, then under Build Settings -> Framework Search Paths, add the path $(PROJECT_DIR)/.
That's pretty much it. Remember to follow the steps carefully and make sure everything works before moving to the next step. This is a non-trivial task; it took me some time to set things up.
If you run into problems, always double-check the previous steps and compare your setup against the sample code first.
The sample code implements all of the above with a simple UI that calls the exported functions (start_recording, stop_recording, fp) from JS. For iOS, a deployment target of iOS 10 and above is required in order to run the sample code; in Xcode go to Project -> Target -> Deployment Info and choose your target OS version. Install the JS dependencies first (npm install inside the project directory). Press to record; press again to stop and fingerprint the audio. The fingerprints will then be logged to the console. Note that the project will not be ready to run out of the box: you at least have to include OpenCV and make sure you have set up the NDK for Android.
If you like the post, please share it and let me know. Additionally, if you believe there is a mistake or misconception in either this article or the sample code, please contact me, or use GitHub issues.