diff --git a/README.md b/README.md
index 791d858..c3911a5 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # ALVINN
 
-Anatomy Lab Visual Identification Neural Net (A.L.V.I.N.N) is a f7 based app for using a computer vision neural net model to identify anatomical structures in photographic imagery.
+Anatomy Lab Visual Identification Neural Net (A.L.V.I.N.N.) is an f7-based app for using a computer vision neural net model to identify anatomical structures in photographic imagery.
 
 ## Install
 * **Android:** Download the latest Android apk in [packages](https://gitea.azgeorgis.net/Georgi_Lab/ALVINN_f7/packages) and open the downloaded file to install.
@@ -21,13 +21,14 @@ Anatomy Lab Visual Identification Neural Net (A.L.V.I.N.N) is a f7 based app for
     * Click on the image file icon to load a picture from the device storage.
     * If demo mode is turned on, you can click on the marked image icon to load an ALVINN sample image.
 1. When the picture is captured or loaded, any identifiable structures will be listed as tags below the image:
-    * Click on each tag to see the structure highlighted in the image.
+    * Click on each tag to see the structure highlighted in the image, or click on the image to see the tag for that structure (additional clicks in the same area will select overlapping structures).
     * Tag color and proportion filled indicate ALVINN's level of confidence in the identification.
-    * If there are potential structures that do not satisfy the current detection threshold, a badge on the detection menu icon will indicate the number of un-displayed structures.
+    * An incorrect tag can be deleted by clicking on the tag's X button.
 
 ## Advanced Features
 ### Detection Parameters
-After an image has been loaded and structure detection has been performed, the detection parameters can be adjusted using the third detection menu button (eye).
+If there are potential structures that do not satisfy the current detection settings, a badge on the detection menu icon will indicate the number of un-displayed structures.
+Clicking on the detection menu icon will open a menu of tools to adjust the detection settings.
 This button will make three tools available:
 1. Confidence slider: You can use the slider to change the confidence threshold for identifying structures. The default threshold is 50% confidence.
@@ -64,7 +65,7 @@ The external server's response must be json with a `detections` key that contain
 ```
 {
   "detections": [
-    {"top": 0.1, "left": 0.1, "bottom": 0.9, "right": 0.9, "label": "dog", "confidence": 90.0 }
+    {"top": 0.1, "left": 0.1, "bottom": 0.9, "right": 0.9, "label": "Aorta", "confidence": 90.0 }
     ...
   ],
 }