Technology
Our strategy is to rely wherever possible on existing machine learning libraries and cloud-based services, choosing those best suited to each application, and to develop a proprietary software stack and infrastructure only when necessary.
OpenAI platform
- its pre-trained Large Language Models (LLMs), GPT-4o and GPT-4o mini as well as prior models; APIs for GPTs; the Assistants API
- DALL-E (image generation) and Whisper (speech recognition)
Google's Vertex AI
- Gemini 1.5 Pro and Gemini 1.5 Flash, plus 150+ foundation models
- Google AutoML
- Cloud SQL and Cloud Storage
- scalable GPU and Tensor Processing Unit (TPU) clusters
Python Machine Learning Libraries
- scikit-learn
- Keras and TensorFlow for deep neural networks
- PyTorch
- Google Colab and Jupyter notebooks
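As a minimal sketch of how these libraries are typically used, the following trains a scikit-learn classifier on the bundled Iris dataset; the dataset and model choice here are illustrative only, not a description of our production pipelines.

```python
# Illustrative scikit-learn workflow: load data, split, fit, evaluate.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small built-in dataset (150 samples, 4 features, 3 classes).
X, y = load_iris(return_X_y=True)

# Hold out a test split for an honest accuracy estimate.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple baseline classifier.
clf = LogisticRegression(max_iter=200).fit(X_train, y_train)

# Mean accuracy on the held-out split.
accuracy = clf.score(X_test, y_test)
```

The same load/split/fit/score pattern carries over to Keras, TensorFlow, and PyTorch models, whether run locally or in Colab/Jupyter notebooks.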
Amazon Web Services (AWS)
- its pre-trained Large Language Models (LLMs)
- EC2 GPU clusters
- predefined machine learning instances
- the Python machine learning libraries listed above
Microsoft Azure
'Open-source' LLMs
- a ranking of the openness of LLMs, including OLMo 7B Instruct, Stanford Alpaca, Mistral, and LLaMA, was published at the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24)