- 👋 | habari | sannu | здравей | नमस्कार | Witam | سلام | hello 🌍 🌏 🌎
- 👀 We design and build language technology for speakers of low-resource and indigenous languages. Our first 50 languages cover 2.8 billion speakers.
- 🌱 Our models train on Universal Dependencies treebanks: CoNLL-U annotated corpora are parsed and used to train PyTorch bi-LSTM models on Intel Gaudi2 accelerators (a minimal training sketch follows this list).
- 🔬 We train local-language foundation models for lexical and syntactic performance, prioritising the features of morphologically complex languages and dialects.
- 🔧 Foundation models are further developed for practical tasks: named entity resolution, sentiment analysis, semantic role labeling, question answering, information extraction, and search.
- 😲 Fine-tuning is then exposed so models can be extended to a customer's own corpus.
- 📡 We deploy seamlessly to cloud, desktop, and mobile (Android) using Intel OpenVINO (see the export sketch after this list).
- 💞️ We joined the Intel Liftoff program in early 2024 and the Denvr AI Ascend program in August 2025.
- 🔓 All public data and models are open-sourced; see license.txt in each repository for details.
- 📫 Search @bezokurepo in the GitHub search bar to find our open-source models and data. For support, email ian.gilmour@bezoku.ai
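
The bullets above describe the training pipeline at a high level. Below is a minimal sketch of the CoNLL-U → bi-LSTM step, not bezoku's actual code: it parses the FORM and UPOS columns of a tiny made-up CoNLL-U fragment and fits a small PyTorch bi-LSTM tagger on CPU. The example sentence, hyperparameters, and helper names (`read_conllu`, `BiLSTMTagger`) are illustrative assumptions; on Gaudi2 hardware the model and tensors would instead be placed on Habana's `hpu` device via its PyTorch bridge.

```python
# Minimal sketch: parse CoNLL-U token/UPOS columns and train a tiny
# bidirectional LSTM tagger in PyTorch. The toy corpus and hyperparameters
# are illustrative assumptions, not bezoku's production pipeline.
import torch
import torch.nn as nn

CONLLU = """\
# text = Habari ya asubuhi
1\tHabari\thabari\tNOUN\t_\t_\t0\troot\t_\t_
2\tya\tya\tADP\t_\t_\t3\tcase\t_\t_
3\tasubuhi\tasubuhi\tNOUN\t_\t_\t1\tnmod\t_\t_
"""

def read_conllu(text):
    """Yield one [(form, upos), ...] list per sentence from CoNLL-U text."""
    sent = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            if sent:
                yield sent
                sent = []
            continue
        if line.startswith("#"):
            continue
        cols = line.split("\t")
        if "-" in cols[0] or "." in cols[0]:   # skip multiword/empty tokens
            continue
        sent.append((cols[1], cols[3]))        # FORM, UPOS columns
    if sent:
        yield sent

sentences = list(read_conllu(CONLLU))
words = {w for s in sentences for w, _ in s}
tags = {t for s in sentences for _, t in s}
word2id = {w: i + 1 for i, w in enumerate(sorted(words))}   # 0 = padding
tag2id = {t: i for i, t in enumerate(sorted(tags))}

class BiLSTMTagger(nn.Module):
    """Embedding -> bidirectional LSTM -> per-token tag logits."""
    def __init__(self, vocab, n_tags, emb=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, ids):
        h, _ = self.lstm(self.emb(ids))
        return self.out(h)

model = BiLSTMTagger(len(word2id) + 1, len(tag2id))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(30):                         # toy training loop, CPU only
    for sent in sentences:
        x = torch.tensor([[word2id[w] for w, _ in sent]])
        y = torch.tensor([tag2id[t] for _, t in sent])
        opt.zero_grad()
        loss = loss_fn(model(x).squeeze(0), y)
        loss.backward()
        opt.step()

print("trained on", len(sentences), "sentence(s); final loss", float(loss))
```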
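
Similarly, a hedged sketch of the OpenVINO deployment step: converting a stand-in, untrained PyTorch tagger to OpenVINO IR and compiling it for CPU. The model, file names, and device choice are assumptions for illustration; it requires the `openvino` Python package (2023.0 or newer).

```python
# Minimal sketch: export a PyTorch model to OpenVINO IR and run it on CPU.
# The TinyTagger model is an illustrative stand-in, not a trained bezoku model.
import numpy as np
import torch
import torch.nn as nn
import openvino as ov

class TinyTagger(nn.Module):
    """Stand-in for a trained bi-LSTM tagger head."""
    def __init__(self, vocab=100, emb=32, hidden=32, n_tags=17):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, ids):
        h, _ = self.lstm(self.emb(ids))
        return self.out(h)

torch_model = TinyTagger().eval()
example = torch.zeros(1, 8, dtype=torch.int64)      # batch of 1, 8 token ids

# Convert the PyTorch model to OpenVINO's intermediate representation (IR).
ov_model = ov.convert_model(torch_model, example_input=example)
ov.save_model(ov_model, "tagger.xml")               # writes tagger.xml/.bin

# Compile for a target device ("CPU" here; other devices where available).
compiled = ov.compile_model(ov_model, "CPU")
logits = compiled(np.zeros((1, 8), dtype=np.int64))[0]
print("output shape:", logits.shape)                # (1, 8, n_tags)
```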