This project consists of a JSON specification of our common import format for Tana, as well as a set of converters which turn other formats into this format.
If you need to do something special with your data before putting it into Tana, you can fork this project and adapt the current converters to do what you need. As long as the resulting file follows the format, you will be able to import it into Tana.
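For orientation, here is a rough sketch of what a converted file can look like. The field names and the version string below are approximations used for illustration; the TypeScript type definitions in this repository are the authoritative specification of the format.

```ts
// Illustrative sketch of the intermediate format -- field names are
// approximations; see the type definitions in this repo for the real spec.
const example = {
  version: 'TanaIntermediateFile V0.1', // assumed version string
  summary: {
    leafNodes: 1,
    topLevelNodes: 1,
    totalNodes: 2,
    calendarNodes: 0,
    fields: 0,
    brokenRefs: 0,
  },
  nodes: [
    {
      uid: 'page-1',
      name: 'My page',
      createdAt: 1650000000000, // epoch milliseconds
      editedAt: 1650000000000,
      type: 'node',
      children: [
        {
          uid: 'child-1',
          name: 'A bullet on the page',
          createdAt: 1650000000000,
          editedAt: 1650000000000,
          type: 'node',
        },
      ],
    },
  ],
};
```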
If you are making changes that you think will benefit other users, please create a pull request.
- Install Node.js 22.x https://nodejs.org/en/download/
- Use npm (comes with Node 22). No Yarn required.
- Download or git clone this tana-import-tools repository (or an as-yet-unmerged branch you want to test, such as logseq)
- In that folder, in a terminal, run `npm install`
- Export your existing PKM data (Roam, Logseq) to that folder and name it appropriately, e.g. `logseq.json`
- Run the appropriate command for your conversion, e.g. `npm run convert:logseq logseq.json`, where `convert:` can be followed by `roam`, `notion`, `logseq`, or other formats
- In Tana, go to the top right menu and import
- Hopefully everything worked! If not, report back to #tana-import-tools
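Before importing, you can give the generated file a quick sanity check. The sketch below assumes the output is a JSON file with the top-level `version`, `summary`, and `nodes` fields sketched above; adjust the field names if the format differs.

```ts
// sanity-check.ts -- quick look at a generated intermediate file before importing.
// Assumes top-level `version`, `summary`, and `nodes` fields (see the sketch above).
import { readFileSync } from 'fs';

const path = process.argv[2];
if (!path) {
  console.error('Usage: ts-node sanity-check.ts <converted-file.json>');
  process.exit(1);
}

const tif = JSON.parse(readFileSync(path, 'utf8'));
console.log(`version:         ${tif.version}`);
console.log(`total nodes:     ${tif.summary?.totalNodes}`);
console.log(`broken refs:     ${tif.summary?.brokenRefs}`);
console.log(`top-level nodes: ${tif.nodes?.length}`);
```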
- 🟢 graph
- 🟢 journal pages
- 🟢 references
- 🟢 headings
- 🟢 todos
- 🟢 images
- 🟢 dates
- 🟡 code blocks (no language support)
- Click the 3 dots in the upper-right
- Click "Export All"
- Change the export format to JSON
- Click "Export All"
- Run `npm run convert:roam datasets/my_roam_export.json`
- 🟢 graph
- 🟢 journal pages
- 🟢 references
- 🟢 headings
- 🟢 dates
- 🟡 todos (TODO/LATER/DONE are supported; NOW/DOING become TODO prefixed with NOW or DOING, and CANCELED becomes DONE prefixed with CANCELED; see the sketch after this list)
- 🟡 logbook (imported as text)
- 🟡 images (only remote images without redirects work; local images/assets are not yet supported)
- 🟡 code blocks (no language support)
- 🔴 simple queries
- 🔴 advanced queries
- 🔴 reference to supertag
- 🔴 favorites (not exported)
- 🔴 whiteboards
- 🔴 flashcards
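To make the todo handling described above concrete, the conversion behaves roughly like the sketch below. This is an illustration of the mapping, not the converter's actual code; the state and marker names are simplified.

```ts
// Illustrative mapping of Logseq task markers to a todo state plus node text,
// matching the behaviour described in the list above.
type TodoState = 'todo' | 'done';

function mapLogseqMarker(marker: string, text: string): { state: TodoState; name: string } {
  switch (marker) {
    case 'TODO':
    case 'LATER':
      return { state: 'todo', name: text };
    case 'DONE':
      return { state: 'done', name: text };
    case 'NOW':
    case 'DOING':
      // Kept as an open todo, with the original marker preserved as a prefix.
      return { state: 'todo', name: `${marker} ${text}` };
    case 'CANCELED':
      // There is no cancelled state in the target format, so mark it done and prefix it.
      return { state: 'done', name: `CANCELED ${text}` };
    default:
      return { state: 'todo', name: text };
  }
}
```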
- Click the three dots in the upper right
- Click "Export graph"
- Click "Export as JSON"
- Run `npm run convert:logseq datasets/my_logseq_export.json`
- 🟢 graph
- 🟢 tables - first column header will always be "Title" in Tana
- 🟢 headings
- 🟢 todos
- 🟢 frontmatter
- 🟢 code blocks
- 🟢 dates
- 🟢 images
- 🟢 graph
- 🟢 todos (Workflowy incomplete todos are imported as text)
- 🔴 headings (not distinguished in OPML)
- 🔴 code blocks (exported as plaintext)
- 🔴 images (not exported)
- 🔴 boards (exported as plaintext)
- 🔴 comments (not exported)
- 🔴 node notes (exported in OPML)
- 🔴 favorites (not exported)
- 🔴 date references (exported in OPML)
- Click the three dots in the upper right
- Click "Export all"
- Select "OPML" and click to download
- Run `npm run convert:workflowy datasets/my_workflowy_export.opml`
- 🟢 graph (links, inline links)
- 🟢 journal pages (daily notes)
- 🟢 todos
- 🟢 headings
- 🟢 tables - first column header will always be "Title" in Tana
- 🟢 code blocks
- 🟢 tags (converted to supertags)
- 🟢 highlighted text
- 🟢 author field
- 🟡 events - dates are not handled
- 🟡 flags (marking as important) - converted to fields
- 🟡 reminders - converted to fields
- 🔴 images
- 🔴 comments on nodes
- 🔴 person assignments
- 🔴 divider (horizontal line)
- 🔴 recurring dates
- You must be on the Desktop app for export functionality
- Click "notebooks" on the left sidebar
- Click the three dots next to the notebook you want to export
- Click "Export notebook..."
- Select all attributes and the "enex" format. Don't include the author attribute (or others) if you don't want them to show up as fields in Tana.
Imports are placed in a new workspace to prevent potential conflicts.
- Click the user profile icon in the upper right
- Click "Import Content"
- Click "Tana Intermediate Format" and navigate to the generated file.
We are always looking for new importers, as well as improvements to existing ones! Contributions from open-source developers are greatly appreciated.
Please check out our Contribution Guide first. Also, make sure you read our Code of Conduct.
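If you would like to try writing a new importer, the basic shape is a converter that takes the contents of an exported file as a string and returns a file in the intermediate format. The snippet below is only a sketch with hypothetical names (`MyFormatConverter`, `SketchTanaNode`); look at the existing converters and type definitions in this repository for the real interface.

```ts
// Sketch of a new converter. All names here are illustrative; follow the
// existing converters in this repo for the actual types and interface.
interface SketchTanaNode {
  uid: string;
  name: string;
  createdAt: number;
  editedAt: number;
  type: 'node';
  children?: SketchTanaNode[];
}

export class MyFormatConverter {
  // Takes the raw contents of an exported file and returns the intermediate format.
  convert(fileContent: string) {
    const now = Date.now();
    const lines = fileContent.split('\n').filter((l) => l.trim().length > 0);

    // Naive example: every non-empty line becomes a top-level node.
    const nodes = lines.map((line, i): SketchTanaNode => ({
      uid: `node-${i}`,
      name: line.trim(),
      createdAt: now,
      editedAt: now,
      type: 'node',
    }));

    return {
      version: 'TanaIntermediateFile V0.1', // assumed version string
      summary: {
        leafNodes: nodes.length,
        topLevelNodes: nodes.length,
        totalNodes: nodes.length,
        calendarNodes: 0,
        fields: 0,
        brokenRefs: 0,
      },
      nodes,
    };
  }
}
```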