
ALVINN

Anatomy Lab Visual Identification Neural Net (A.L.V.I.N.N.) is a Framework7 (f7) based app that uses a computer vision neural net model to identify anatomical structures in photographic imagery.

Install

  • Android: Download the latest Android apk in packages and open the downloaded file to install.
  • iOS: To do
  • Web app: Download the latest Web zip file in packages and extract the files to a folder available via web access, then visit that location in your web browser.
  • Run from source: Clone this repository and, in the root directory, run npm install followed by npm start. For more information see f7 info.

Quick Start

  1. Select the region of the body you want to identify structures from. The regions are:
    • Thorax and back
    • Abdomen and pelvis
    • Limbs
    • Head and neck
  2. Load an image in one of the following ways:
    • Click on the camera icon to take a new picture.
      • ALVINN will highlight areas with potential structures as you aim the camera.
      • Press Capture to use the current camera view.
    • Click on the image file icon to load a picture from the device storage.
    • If demo mode is turned on, you can click on the marked image icon to load an ALVINN sample image.
  3. When the picture is captured or loaded, any identifiable structures will be listed as tags below the image:
    • Click on each tag to see the structure highlighted in the image, or click on the image to see the tag for that structure (additional clicks to the same area will select overlapping structures).
    • Tag color and proportion filled indicate ALVINN's level of confidence in the identification.
    • An incorrect tag can be deleted by clicking on the tag's X button.

Advanced Features

Detection Parameters

If there are potential structures that do not satisfy the current detection settings, a badge on the detection menu icon will indicate the number of undisplayed structures. Clicking on the detection menu icon opens a menu with three tools for adjusting the detection settings:

  1. Confidence slider: You can use the slider to change the confidence threshold for identifying structures. The default threshold is 50% confidence.
  2. Refresh detections: If there has been a permanent change to the structure detections, such as deleting a tag, the detection list can be reset to its original state.
  3. Structure list: You can view a list of all the structures available for detection in that region and select or deselect individual structures for detection.
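The confidence-threshold filter described above can be sketched as follows. This is an illustrative model, not ALVINN's actual code: `visibleDetections`, its parameters, and the sample data are hypothetical, while the 50% default matches the slider's documented default.

```javascript
// Hypothetical sketch of the detection-settings filter: a detection is shown
// only if its confidence meets the threshold and its structure is enabled.
const DEFAULT_THRESHOLD = 50; // percent, the slider's documented default

function visibleDetections(detections, threshold = DEFAULT_THRESHOLD, enabled = null) {
  return detections.filter(
    (d) => d.confidence >= threshold && (enabled === null || enabled.includes(d.label))
  );
}

// Illustrative detections in the documented response format.
const detections = [
  { label: "Aorta", confidence: 90.0 },
  { label: "Esophagus", confidence: 35.0 },
];

// The badge count is the number of detections hidden by the current settings.
const hidden = detections.length - visibleDetections(detections).length;
```

Lowering the threshold (e.g. `visibleDetections(detections, 30)`) would surface the hidden detections, which is what the confidence slider controls.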

Submitting Images

Once all of the currently visible detection tags have been viewed, the final button (cloud upload) on the detection menu is enabled. This button uploads the image and the verified structures to the ALVINN project servers, where that data will be available for further training of the neural net. If the available detection tags change after the image has been uploaded, the option to re-upload the image becomes available once all of the new tags have been viewed and verified.
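The upload gate described above can be sketched as a simple predicate. The function name and tag shape are hypothetical, chosen only to illustrate the rule that every visible tag must have been viewed before the cloud-upload button enables.

```javascript
// Hypothetical sketch: the cloud-upload button is enabled only when there is
// at least one visible tag and every visible tag has been viewed.
function canUpload(tags) {
  return tags.length > 0 && tags.every((t) => t.viewed);
}

canUpload([{ label: "Aorta", viewed: true }]);  // enabled
canUpload([{ label: "Aorta", viewed: false }]); // disabled until viewed
```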

Configuration

Aspects of the hosted ALVINN PWA are configured through the conf.yaml file in the conf folder.

Site settings

The following site settings are available:

| name | description | values | default |
|------|-------------|--------|---------|
| agreeExpire | number of months before users are shown the site agreement dialog again; set to 0 to display the dialog on every reload | integer >= 0 | 3 |
| demo | set to true to enable demo mode by default | boolean | false |
| regions | array of region names to enable | thorax, abdomen, limbs, head | [thorax, abdomen, limbs, head] |
| useExternal | determines the ability to use an external detection server: none = external server cannot be configured; optional = external server can be configured in the app's settings page; list = external server can be selected in the app's settings page, but only the configured server(s) may be selected; required = external server settings from the conf file are used by default and server options in the settings page are disabled | none, optional, list, required | optional |
| disableWorkers | force the app to use a single thread for detection computations instead of multithreaded web workers | boolean | optional |
| external | properties of the external server(s) ALVINN may connect to; must be a single-element array if useExternal is set to required, and an array of one or more elements if useExternal is set to list | external server settings array | [] |
| infoUrl | root URL for links to information about identified structures; structure labels with spaces replaced by underscores are appended to this value to form the full information link (e.g., Abdominal_diaphragm) | string | info link not shown |
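A minimal conf.yaml illustrating the settings above might look like the fragment below. The nesting and the infoUrl value are assumptions for illustration; the other values are the documented defaults.

```yaml
# Hypothetical conf/conf.yaml sketch; values are the documented defaults
# except infoUrl, which is shown with an illustrative placeholder value.
agreeExpire: 3          # months between agreement dialogs; 0 = every reload
demo: false
regions: [thorax, abdomen, limbs, head]
useExternal: optional   # none | optional | list | required
disableWorkers: false
external: []            # see "External server settings" below
infoUrl: https://example.org/anatomy/   # omit to hide info links
```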

External server settings

ALVINN can use an external object detection server instead of the built-in models; settings for that external server are configured here. These settings must be configured if the site setting useExternal is set to list or required.

| name | description | default |
|------|-------------|---------|
| name | identifier for the external server | none |
| address | IP or URL of the external server | none |
| port | port to access on the external server | 9001 |
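An external server entry following the table above could look like this. The identifier and address are illustrative placeholders; only the keys and the default port come from the documentation.

```yaml
# Hypothetical external server entry; a single-element array as required
# when useExternal is set to "required".
external:
  - name: lab-detection-server   # illustrative identifier
    address: 192.168.1.50        # illustrative IP
    port: 9001                   # documented default
```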

The external server's response must be JSON with a detections key that contains an array of the detected structure labels, bounding box data, and confidence values.

```json
{
  "detections": [
    { "top": 0.1, "left": 0.1, "bottom": 0.9, "right": 0.9, "label": "Aorta", "confidence": 90.0 },
    ...
  ]
}
```
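A client consuming this response could parse and validate it as sketched below. The function name and output shape are hypothetical; the field names match the documented response format.

```javascript
// Hypothetical sketch of parsing the external server's documented response.
// Throws if the required "detections" array is missing.
function parseDetections(json) {
  const body = JSON.parse(json);
  if (!Array.isArray(body.detections)) {
    throw new Error("response missing 'detections' array");
  }
  return body.detections.map(({ top, left, bottom, right, label, confidence }) => ({
    label,
    confidence,
    box: { top, left, bottom, right }, // normalized bounding box coordinates
  }));
}

// Sample response in the documented format.
const sample =
  '{"detections":[{"top":0.1,"left":0.1,"bottom":0.9,"right":0.9,"label":"Aorta","confidence":90.0}]}';
```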