Frontal pose is now a string instead of an enum. This makes it possible to omit the value when using the Search API in order to search for both FRONTAL and NON_FRONTAL poses.
Retraining face recognition models on identity updates should be faster than before.
Fixed an issue with identities sometimes not being retrained properly.
Fixed an issue with Tensorflow spamming the Temp directory with DLLs when the Engine reboots.
Set the "no DB" mode as the default mode, so that the Engine won't crash on startup without a Postgres DB running. To run with a Database, re-enable it in the configuration file.
Face recognition now returns the sharpness and frontal pose of a detected face. The sharpness is a value between 0.0 and 1.0, and represents how clear a face is. Using sharper faces for training will yield better results. The frontal pose can be either FRONTAL or NON_FRONTAL. FRONTAL faces are faces that are facing the camera directly. Face search has been updated with matching parameters (minSharpness and frontalPose).
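As a rough illustration of the new filters, a search restricted to sharp, frontal faces might look like the sketch below; the client method name and request fields are assumptions, not the documented API, so check the Search API reference for the exact shape.
# Sketch only: method name and request fields are assumptions.
results = client.search.faces({
    'minSharpness': 0.7,       # keep reasonably sharp faces only (0.0 - 1.0)
    'frontalPose': 'FRONTAL',  # omit this key to match both FRONTAL and NON_FRONTAL
})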
Face results now also return an optional "Recognized Identity ID".
Calls to the Classification endpoint will be more efficient when no exclusion zones are provided.
The old Summary API was removed.
The HTTP mode was removed.
Detection and Face Recognition will now return not only bounding_box (unchanged), but also normalized_bounding_box. The normalized bounding box's coordinates are double values between 0.0 and 1.0.
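Since the normalized coordinates are fractions of the image size, converting them back to pixels is a simple multiplication. A minimal sketch, assuming the box exposes x_min/y_min/x_max/y_max fields (the actual message layout may differ):
# Convert a normalized bounding box (values in [0.0, 1.0]) to pixel coordinates.
def to_pixels(normalized_bounding_box, image_width, image_height):
    return {
        'x_min': int(normalized_bounding_box['x_min'] * image_width),
        'y_min': int(normalized_bounding_box['y_min'] * image_height),
        'x_max': int(normalized_bounding_box['x_max'] * image_width),
        'y_max': int(normalized_bounding_box['y_max'] * image_height),
    }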
The C++ Client files are now delivered in separate folders:
lib for libava_engine_client.a
playground for code examples
include for the rest
Fixed an issue with the GPU Setup sometimes causing an Error 2755 on Windows.
Fixed a bug where the config file (conf/application.conf) sometimes could not be found.
When we reach the limit for the list faces endpoint, we will now return the most recent faces instead of the oldest ones. In that scenario, you should now handle pagination by decreasing end instead of increasing start to fetch all faces for a given period of time.
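A sketch of what that backwards pagination could look like with the Python client; the list call, its parameters and the returned fields are assumptions for illustration only:
# Fetch all faces in [start, end] by walking backwards, since each page is
# capped at the most recent faces.
def fetch_all_faces(client, group_id, start, end):
    all_faces = []
    while True:
        page = client.faces.list(group_id, start=start, end=end)  # hypothetical call
        if not page:
            break
        all_faces.extend(page)
        # Move the window back in time: decrease `end` to just before the
        # oldest face returned in this page.
        end = min(face['timestamp'] for face in page) - 1
    return all_faces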
Requests can now specify start, end and limit to handle pagination.
Responses will return not only the list of cluster sets, but also the parameters from the request (group_id, custom_id, start, end and limit).
Clustering errors will now return code NOT_FOUND if a given Face ID doesn't exist or INVALID_ARGUMENT if a Face doesn't belong to the right Group, instead of generic UNKNOWN errors.
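On the caller side, the new codes can be told apart with standard gRPC status handling. A minimal sketch in Python, assuming the error surfaces as a grpc.RpcError (how the wrapper client propagates errors may differ, and the clustering call shown is hypothetical):
import grpc

group_id = 'my-group'
face_ids = ['face-1', 'face-2']
try:
    client.clustering.compute(group_id, face_ids)  # hypothetical clustering call
except grpc.RpcError as err:
    if err.code() == grpc.StatusCode.NOT_FOUND:
        print('One of the Face IDs does not exist:', err.details())
    elif err.code() == grpc.StatusCode.INVALID_ARGUMENT:
        print('A face does not belong to this group:', err.details())
    else:
        raise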
Fixed a bug sometimes causing the Engine to crash at startup in "No DB" mode.
The Engine now works with a Postgres database instead of SQLite. The database configuration now has 3 parameters: enabled (boolean), auto-migrate (boolean) and url (string).
If auto-migrate is set to false, the Engine comes with an external DB Migrator tool, located in bin/migrate-db[.bat].
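A sketch of how the database section of conf/application.conf could look; the nesting and the URL format are assumptions, only the three parameter names come from this release note:
# conf/application.conf (sketch)
database {
  enabled = true
  auto-migrate = false  # when false, run bin/migrate-db[.bat] manually
  url = "postgresql://localhost:5432/ava_engine"  # example URL, adjust to your setup
}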
Zones can now be excluded from all classification, detection and face recognition calls. Refer to the API specification for more information about the format.
Note that these changes came with breaking changes to the Search API responses.
It is now possible to flag one of the faces of an identity as the cover image of this person. Look for updates to AddIdentityRequest and UpdateIdentityRequest to find out more about this feature.
gRPC responses now contain more details than before and include a gRPC Status. See the list of gRPC Status codes for more information.
The API to compute a cluster has changed: instead of clustering just the last 1000 faces for a group, it now takes a list of faces to cluster. This is a breaking change. Refer to the API reference for more information about this endpoint.
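A hypothetical sketch of the new call shape (the method and argument names are assumptions; only the switch from "last 1000 faces" to an explicit face list is documented):
face_ids = ['face-1', 'face-2', 'face-3']                     # the faces you want clustered
cluster_set = client.clustering.compute(group_id, face_ids)   # hypothetical call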
The default value of max faces per identity is now back to 50 instead of 1000. It can be customized in the clustering config.
Engine status will now be logged every 15 seconds.
A mock face detector can now be used. This can be useful for local development. Just change face-detector's type to mock in the config to use it.
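The relevant part of the config could look like the sketch below; the surrounding structure is an assumption, the documented change is setting face-detector's type to mock:
# conf/application.conf (sketch)
face-detector {
  type = mock
}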
An UpdateIdentityRequest can now ask to re-add faces that were already assigned to this identity. No changes will be performed by doing so, but the request will no longer be rejected.
A new API endpoint to list all faces for a given group ID was added. See the API reference for more information about this endpoint.
Parameters before and after have been renamed start and end. Note that before actually corresponds to the new end parameter.
The name field is now optional when updating an existing Identity.
We've removed the Groups resource and the CRUD APIs around it. GroupId is still a required field on face recognition APIs, but you no longer need to create a group beforehand.
Configuration is now done via a file, not via CLI args. The file lives at conf/application.conf.
The NVIDIA Graphics Card Drivers are now an optional feature that can be selected during setup, disabled by default. It is also possible to choose not to install the various CUDA and cuDNN features individually.
We've added a concept to the Face Recognition feature called Groups. Groups allow segmenting a set of identities. An example use would be if your product had two physical locations and you don't want one location's identities to mix with the other's. You can use Groups to separate your identities. See the examples for how this is used.
We've added a persistence parameter to all the analytic calls. It used to be a boolean flag, should_save, which controlled whether images were saved in the DB and on disk. It's now possible to save the image, save the image AND the result, or save nothing. There are 3 possible values: SAVE_ALL, SAVE_RESULT and SAVE_NOTHING. See the examples for how this is used.
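A sketch of passing the new value on a call, building on the playground example; how the option is attached to the request is an assumption, only the three values are documented:
client.classification.detect(feed_id, image_blobs, ['person'],
                             persistence='SAVE_ALL')  # one of SAVE_ALL, SAVE_RESULT, SAVE_NOTHING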
We've added functions around Clustering CRUD. You can now list and get individual cluster sets.
The linux version now comes with a *.deb package. This Debian package installs and sets up the Daemon as a SystemV service.
--retentionPeriod was renamed --retention-period
--httpPort was renamed --http-port
We've removed the following concepts:
Due to removing feeds, retention is now globally configured. Use the --retentionPeriod flag to control it.
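For example (the unit of the value is assumed to be days here, as with the per-feed retention described further down):
ava-engine --retentionPeriod=30   # keep analytics data for 30 days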
The Engine now experimentally supports being called via HTTP, as well as the usual gRPC calls. Because the API is primarily RPC based, every method is a POST. We don't have documentation around this feature yet (i.e. routes etc). If you are interested in using it, let us know and we can get you some more info.
This feature is behind a flag. To enable it, use the --http and --httpPort flags.
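For example, to expose the HTTP API on port 8080 alongside gRPC (the port value is chosen arbitrarily):
ava-engine --http --httpPort=8080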
The Windows GPU installer now bundles CUDA and CUDNN.
A timeout in milliseconds can now be specified when creating the client.
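A hypothetical sketch; the release note only says a timeout in milliseconds can be given at client creation, so the parameter name below is an assumption:
client = AvaEngineClient(timeout=5000)  # fail calls that take longer than 5 seconds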
Downgraded gRPC to version 1.10.1.
The Classification feature will now choose between a day and night model, depending on the contents of the image. This change should work transparently.
A per-feed configurable retention period has been implemented. After N days have elapsed, images older than that will be deleted. This feature is behind a feature flag for now. Enable it with --retention on the command line when starting the daemon.
This is a large release, with several breaking changes. You will need to:
The API has changed significantly (this is a beta product, apologies in advance). There are no docs at this stage, so your best port of call is the example code, and asking on Slack.
The face recognition feature has been completed. Check the example apps to see how this works. There is support for face-clustering to help you train the system; however, it's still in beta, and the API is likely to change.
Detection has been added. You will get bounding boxes around detected objects.
Car classification is now supported. Just enable the feature with the car class.
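Mirroring the playground classification call, requesting the car class would look like:
client.classification.detect(feed_id, image_blobs, ['car'])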
We now support automatic batching for classification. We ingest images via a queue, and push them into the classifier in batches when it's not busy. The client doesn't need to batch calls - the daemon does it for you. You won't see a speed increase, but you will see less drop-off if you have many cameras (4+).
There is an optional should_save parameter on feature detect calls that tells the daemon not to save the result.
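A sketch of skipping persistence for a one-off call; how the flag is passed is an assumption based on the playground-style client:
client.classification.detect(feed_id, image_blobs, ['person'], should_save=False)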
You can now search previous results for interesting events. The search works across classification, detection and face-rec results. Check the example applications for info on the search DSL.
map<string, FeatureArguments>
feed_id
AnalyticsResult type. We used to share this type across all results (classification, face-recognition etc). Each feature now returns its own type.
no-ml. This flag will use dummy ML implementations, and return random results. Really only useful for testing.
See ava-engine --help for more details.
This is a large release, with several breaking changes. You will need to:
The API has changed significantly (this is a beta product, apologies in advance). There are no docs at this stage, so your best port of call is the example code, and asking on Slack.
rm -rf your models directory.
The API now has a concept of 'Feeds'. A Feed is any image data source (e.g. camera or NVR).
The end goal is for us to provide better analytics/insight in the future (e.g. search capabilities on a specific set of cameras).
We recommend taking a look at the playground files for a full example. It describes how to add/remove/list feeds.
Note that this change will require the database to be destroyed and re-created (rm -rf the sqlite3 database file).
client = AvaEngineClient()
# Enable features globally; features need to be enabled before they can be used.
client.initialize({
    'features': {
        'classification': {'classes': ['person']},
    },
})
# Create Feed entities for the cameras you wish to use features for.
feed_id = '1NJdOI_1'
client.feeds.create(feed_id)
# Enable the feature for the camera
client.feeds.configure(feed_id, {
    'features': {
        'classification': {
            'classes': ['person'],
        },
    },
})
# Perform analytics as you normally would but now also specify the `feed_id`.
client.classification.detect(feed_id, image_blobs, ['person'])
Renamed --model-path to --models-path for consistency.
See ava-engine --help for more details.
We plan to replace GetDetect with a more generic Search endpoint. We suggest for now you store results from the Detect call if you need to refer to the results downstream.
Error types: FeatureError, FeedError, AnalyticsError, RepositoryError and ServiceError.
playground.js and Playground.cc files are now included in the client distribution.
Fixed an issue where the ClassificationFeature.h header file was missing in the C++ distribution.
This is a large release, with several breaking changes. You will need to:
CLI args have changed
Usage:
ava-engine [options]
Options:
-p --port=<port> Socket port to bind to [default: 50051]
-d --db-path=<path> Path to directory to store or read Ava Engine's persistence layer [default: ./ava-engine.db]
-m --model-path=<path>        Path to directory of analytics models [default: ./models]
-l --license-path=<path> Path to read license from [default: ./LICENSE]
--reset-features Removes previously loaded features before starting the server
--max-message-size=<s> Maximum message size for requests and responses in megabytes [default: 10]
-h --help Print help usage information and quit
We've removed the concept of models from the user-facing API.
The API no longer deals in models. It now deals in "features". The features available are:
You now enable the features you want when instantiating the API, and we take care of the rest.
All the client libraries (CPP, Python and JS) have been changed to reflect this. Please see the example code in each library.
All features now have an optional Feed ID field. This field represents the camera or feed the image is from.
Upgraded to Tensorflow 1.7. CUDA 9.0 and cuDNN 7.0.5 are now required.