The MATLAB version of the GUI is no longer supported (see old [documentation](https://github.com/MouseLand/facemap/blob/main/docs/svd_matlab_tutorial.md)).
## Installation
If you are using a GPU, make sure its drivers and the CUDA libraries are correctly installed.
5. To install the minimal version of facemap, run `python -m pip install facemap`.
6. To install facemap and the GUI, run `python -m pip install facemap[gui]`. If you're using a zsh shell, you may need quotes around the bracketed name: `python -m pip install 'facemap[gui]'`.

For Macbook users with M1/M2 chips, please follow these instructions for installation:
1. Install an [Anaconda](https://www.anaconda.com/products/distribution) distribution of Python. Note you might need to use an anaconda prompt if you did not add anaconda to the path.
2. Open an anaconda prompt / command prompt which has `conda` for **python 3** in the path.
3. Create a new environment with `conda create -y -n facemap python=3.9 pyqt imagecodecs pyqtgraph matplotlib`
4. To activate this new environment, run `conda activate facemap`
5. Next, install facemap in the environment: `pip install facemap`
6. Finally, run `python -m facemap` to launch the facemap GUI.
To upgrade facemap (package [here](https://pypi.org/project/facemap/)), run the following in the environment:
~~~sh
pip install facemap --upgrade
~~~
Facemap supports grayscale and RGB movies. Movie file extensions supported include:
'.mj2','.mp4','.mkv','.avi','.mpeg','.mpg','.asf'
For more details, please refer to the [data acquisition page](https://github.com/MouseLand/facemap/blob/main/docs/data_acquisition.md).
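As a quick sanity check before processing, you can verify a movie's extension against the list above. This is a minimal sketch; `is_supported` is a hypothetical helper for illustration, not part of facemap:

```python
from pathlib import Path

# Extensions listed above as supported by facemap
SUPPORTED_EXTS = {".mj2", ".mp4", ".mkv", ".avi", ".mpeg", ".mpg", ".asf"}

def is_supported(path):
    """Return True if the file extension is one facemap can read."""
    return Path(path).suffix.lower() in SUPPORTED_EXTS

print(is_supported("cam1.avi"))   # True
print(is_supported("notes.txt"))  # False
```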
## Support
Facemap provides a trained network for tracking distinct keypoints on the mouse face.
Keypoints will be predicted in the selected bounding box region, so please ensure the bounding box focuses on the face. See example frames [here](figs/mouse_views.png).
For more details on using the tracker, please refer to the [GUI Instructions](https://github.com/MouseLand/facemap/blob/main/docs/pose_tracking_gui_tutorial.md). Check out the [notebook](https://github.com/MouseLand/facemap/blob/main/docs/notebooks/process_keypoints.ipynb) for processing keypoints in Colab.
Facemap allows pupil tracking, blink tracking and running estimation.
You can draw ROIs to compute the motion/movie SVD within the ROI, and/or compute the full video SVD by checking `multivideo`. Then check `motSVD` and/or `movSVD` and click `process`. The processed SVD `*_proc.npy` (and optionally `*_proc.mat`) file will be saved in the selected output folder.
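The saved `*_proc.npy` output can be loaded back into Python as a dictionary with numpy. A minimal sketch of the loading pattern is below; the stand-in dictionary and filename here are fabricated for illustration, and the exact keys in a real file depend on which processing options you checked:

```python
import numpy as np

# Create a tiny stand-in for a facemap output file (a real *_proc.npy
# holds arrays such as motion/movie SVDs computed from your video)
demo = {"motSVD": [np.zeros((10, 3))], "avgframe": [np.zeros((4, 4))]}
np.save("cam1_proc.npy", demo)

# Loading pattern for a saved *_proc.npy dictionary
proc = np.load("cam1_proc.npy", allow_pickle=True).item()
print(sorted(proc.keys()))  # ['avgframe', 'motSVD']
```

`allow_pickle=True` and `.item()` are needed because the dictionary is stored as a 0-d object array.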
For more details see [SVD python tutorial](https://github.com/MouseLand/facemap/blob/main/docs/svd_python_tutorial.md) or [SVD MATLAB tutorial](https://github.com/MouseLand/facemap/blob/main/docs/svd_matlab_tutorial.md).
([video](https://www.youtube.com/watch?v=Rq8fEQ-DOm4) with old install instructions)
Please see neural activity prediction [tutorial](https://github.com/MouseLand/facemap/blob/main/docs/neural_activity_prediction_tutorial.md) for more details.
This tutorial shows how to use the deep neural network encoding model to predict neural activity based on mouse orofacial behavior.

To run neural activity prediction using pose estimates extracted with the keypoint tracker:
1. Load or process keypoints ([see pose tracking tutorial](https://github.com/MouseLand/facemap/blob/main/docs/pose_tracking_gui_tutorial.md)).
2. Select `Neural activity` from the file menu.
3. Click on `Launch neural activity window`.
4. Select `Load neural activity` (a 2D array stored in `*.npy`) and (optionally) timestamps for neural and behavioral data (1D arrays stored in `*.npy`), then click `Done`.
5. Once the neural data is loaded, click on `Run neural predictions`.
6. Select `Keypoints` as input data and select one of the options for the model's prediction output, which can be `Neural PCs` or neural activity. Click on the `Help` button for more information.
7. The predicted neural activity (`*.npy`) file will be saved in the selected output folder.
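The optional timestamps in step 4 let the two data streams, sampled at different rates, be aligned on a common timebase. The idea can be illustrated with plain numpy interpolation (an illustrative sketch with made-up sampling rates, not facemap's internal code):

```python
import numpy as np

# Hypothetical example: neural data at 10 Hz, video keypoints at 30 Hz
neural_t = np.arange(0.0, 10.0, 0.1)      # 1D timestamps, neural frames
behav_t = np.arange(0.0, 10.0, 1.0 / 30)  # 1D timestamps, video frames
behav_signal = np.sin(behav_t)            # stand-in for one keypoint trace

# Resample the behavioral trace onto the neural timebase
aligned = np.interp(neural_t, behav_t, behav_signal)
print(aligned.shape)  # (100,)
```

Each behavioral sample is linearly interpolated to the nearest neural timestamps, so both arrays end up with one value per neural frame.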
To predict neural activity using SVDs from Facemap:
1. Load or process SVDs for the video ([see SVD tutorial](https://github.com/MouseLand/facemap/blob/main/docs/svd_python_tutorial.md)).
2. Follow steps 2-7 above.
Note: a linear model is used for prediction using SVDs.
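A minimal version of such a linear model is ordinary least squares from SVD components to neural activity, fit on a training split and evaluated on held-out data. The sketch below uses toy random data and plain least squares; it illustrates the idea only and is not facemap's exact model or regularization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 500 timepoints, 20 motion-SVD components, 5 neural PCs
X = rng.standard_normal((500, 20))                    # behavioral SVDs
Y = X @ rng.standard_normal((20, 5)) + 0.1 * rng.standard_normal((500, 5))

# Fit a linear map on a training split, predict held-out test data
X_train, X_test = X[:400], X[400:]
Y_train, Y_test = Y[:400], Y[400:]
W, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)
Y_pred = X_test @ W

# Correlation between true and predicted activity for the first PC
corr = np.corrcoef(Y_test[:, 0], Y_pred[:, 0])[0, 1]
print(round(corr, 2))
```

Evaluating on the held-out segment mirrors the train/test split the GUI uses when highlighting test data.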
Predicted neural activity will be plotted in the neural activity window. Toggle `Highlight test data` to highlight time segments not used for training, i.e. test data. Further information about neural prediction, including variance explained, can be found in the saved neural prediction file (`*.npy`).
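Variance explained, mentioned above, is commonly computed as one minus the ratio of residual variance to total variance. The generic definition is sketched below; facemap's saved value may be computed per neuron or on test data only:

```python
import numpy as np

def variance_explained(y_true, y_pred):
    """Fraction of variance in y_true captured by y_pred."""
    return 1.0 - np.var(y_true - y_pred) / np.var(y_true)

y = np.array([0.0, 1.0, 2.0, 3.0])
print(variance_explained(y, y))            # 1.0 for a perfect prediction
print(variance_explained(y, np.zeros(4)))  # 0.0 for an uninformative one
```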