Technology

Our strategy is to rely as much as possible on existing machine-learning libraries and cloud-based services, choosing those best suited to each application, and to develop a proprietary software stack and infrastructure only when necessary.

OpenAI platform 
  • Pre-trained frontier Large Language Models (LLMs) — latest GPT-5.2 series (flagship for advanced reasoning, coding, agentic workflows, multimodal tasks, professional knowledge work; includes GPT-5.2 pro for precision, GPT-5.2 Instant/Thinking/Pro variants, GPT-5.2-Codex for long-horizon coding/agents), plus stable GPT-5 / GPT-5.1 options and mini/nano for efficiency
  • Specialized models — Whisper (speech recognition/transcription), TTS models (high-quality text-to-speech), GPT Image 1.5 (state-of-the-art image generation), Sora 2 (advanced video generation with synced audio), and multimodal capabilities across text, vision, audio, and video
  • OpenAI APIs for seamless interaction — Chat Completions (core text/reasoning), Responses API (optimized for agentic/coding tasks), Realtime API (natural voice agents), Assistants/Agents tools (building/deploying intelligent agents with tools, memory, and orchestration), plus dedicated endpoints for images, audio, embeddings, moderation, and more
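Interaction with these endpoints follows a common request shape. The sketch below, using Python, shows how a Chat Completions request is assembled; the model name is illustrative, and the actual SDK call (commented out) requires the `openai` package and an `OPENAI_API_KEY`.

```python
# Minimal sketch of a Chat Completions request body.
# The model name "gpt-5.2" is an assumption for illustration.
def build_chat_request(prompt: str, model: str = "gpt-5.2") -> dict:
    """Assemble the request body sent to the Chat Completions endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

# With the official SDK the same request is a single call
# (requires network access and credentials, so it is commented out here):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(**build_chat_request("Hello"))
# print(resp.choices[0].message.content)
```

The same messages structure carries over to the Responses and Realtime APIs, which is what makes switching between endpoints straightforward.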
Google's Vertex AI
Python Machine Learning Libraries
  • scikit-learn, the essential library for classical machine learning (classification, regression, clustering, preprocessing, model evaluation, and pipelines)
  • Keras and TensorFlow, robust frameworks for scalable deep learning, production deployment, and enterprise use
  • Hugging Face Transformers, pre-trained models and tools for NLP, vision, audio, and multimodal AI
  • Google Colab and Jupyter notebooks, essential interactive tools for development, prototyping, and collaborative ML projects
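As a concrete illustration of the classical-ML workflow these libraries support, the sketch below chains preprocessing and a model into a single scikit-learn pipeline; the dataset and model choices are illustrative, not a description of any specific production system.

```python
# A minimal scikit-learn pipeline: scaling + a classical classifier,
# evaluated with a held-out test split.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The pipeline applies the scaler before the model in both fit and predict,
# which prevents test-set statistics from leaking into preprocessing.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
pipe.fit(X_train, y_train)
accuracy = pipe.score(X_test, y_test)
```

The same pipeline object can be passed to scikit-learn's cross-validation and grid-search utilities unchanged, which is the main reason to prefer pipelines over ad-hoc preprocessing steps.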
Amazon Web Services (AWS) 
  • Amazon Bedrock, fully managed service with access to leading pre-trained foundation models (LLMs and multimodal), including Amazon Nova series (Micro, Lite, Pro), Anthropic Claude (e.g., Opus/Sonnet 4.x), Mistral, Meta Llama, and more — plus tools for agents, fine-tuning, reinforcement fine-tuning (RFT), Knowledge Bases, guardrails, and cost optimization
  • Amazon SageMaker AI, unified platform for building, training, customizing, and deploying machine learning and generative AI models (includes SageMaker Unified Studio, JumpStart for pre-built solutions, serverless customization, HyperPod for large-scale training, and MLOps)
  • Scalable EC2 GPU instances and accelerated computing clusters (latest: G7e with NVIDIA RTX PRO 6000 Blackwell GPUs, P6 series with Blackwell B200/GB200, plus Trainium/Inferentia chips for efficient ML workloads)
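Bedrock models are reached through a uniform request format as well. The sketch below assembles arguments for the Bedrock Converse API; the model ID is an assumption for illustration, and the boto3 call itself (commented out) requires AWS credentials and network access.

```python
# Sketch of a Bedrock Converse API request; "amazon.nova-lite-v1:0" is an
# illustrative model ID, not a recommendation.
def build_converse_args(prompt: str, model_id: str = "amazon.nova-lite-v1:0") -> dict:
    """Assemble keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }

# The actual call via boto3 (needs credentials, so commented out):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# resp = client.converse(**build_converse_args("Summarize this document."))
# print(resp["output"]["message"]["content"][0]["text"])
```

Because the Converse API normalizes the message format across providers, swapping Nova for a Claude or Llama model on Bedrock is a one-line change to the model ID.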
Microsoft Azure 
  • Azure OpenAI Service (via Azure AI Foundry Models) — fully managed access to leading foundation and reasoning models from OpenAI
  • Microsoft Copilot Studio and Azure AI Bot Service — low-code/no-code platform for building, testing, and deploying intelligent conversational AI bots and agents (multi-channel support across Teams, web, Slack, etc.; integration with Azure AI for natural language, speech, and complex orchestration; extendable for enterprise workflows with human-in-the-loop)
  • Scalable GPU instances and accelerated compute clusters (e.g., ND-series with NVIDIA GPUs, plus custom accelerators for efficient training/inference of large models)
'Open-source' LLMs 
A ranking of the openness of LLMs, including OLMo 7B Instruct, Stanford Alpaca, Mistral, and Llama, was published at the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24).