Technology & Tools
The models, platforms, and tools we use — and why we choose them.
10 questions answered
01
Which AI models do you use — OpenAI, Claude, Gemini?
We work with all major providers: OpenAI (GPT-4o), Anthropic (Claude 3.5), Google (Gemini Pro), and leading open-source models like Mistral and LLaMA. We select the best model for each specific task based on capability, cost, and data privacy requirements.
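That selection logic can be pictured as a simple router. This is an illustrative sketch only: the model names, capability scores, and prices below are hypothetical placeholders, not our actual configuration.

```python
# Illustrative model router: pick a model by task requirements.
# Scores and per-1k-token prices are hypothetical placeholders.
MODELS = {
    "gpt-4o":     {"capability": 9, "cost_per_1k": 0.005, "self_hosted": False},
    "claude-3.5": {"capability": 9, "cost_per_1k": 0.003, "self_hosted": False},
    "mistral-7b": {"capability": 6, "cost_per_1k": 0.000, "self_hosted": True},
}

def pick_model(min_capability: int, needs_self_hosting: bool) -> str:
    """Return the cheapest model that meets the capability and privacy bar."""
    candidates = [
        (spec["cost_per_1k"], name)
        for name, spec in MODELS.items()
        if spec["capability"] >= min_capability
        and (spec["self_hosted"] or not needs_self_hosting)
    ]
    if not candidates:
        raise ValueError("no model satisfies the requirements")
    return min(candidates)[1]  # cheapest qualifying model

print(pick_model(min_capability=8, needs_self_hosting=False))
print(pick_model(min_capability=5, needs_self_hosting=True))
```

The same idea scales to real routing: score each task on capability, cost, and data-privacy needs, then let the router choose.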
02
Are you tied to one AI provider or model-agnostic?
Fully model-agnostic. We have no commercial relationship with any AI provider. We choose models based entirely on what delivers the best result for your use case — not what pays us a commission.
03
What automation tools do you use — n8n, Make, Zapier?
n8n and Make are our primary automation platforms. Zapier works well for simpler integrations. For complex, high-volume workflows, we build custom Python-based pipelines.
04
Do you use open-source AI models?
Yes, when appropriate. Open-source models (Mistral, LLaMA, Phi) are particularly useful when data privacy is critical and you need models running on your own infrastructure rather than third-party APIs.
05
Can you work with my existing tech stack?
Yes. We build around what you already have. We’ve integrated AI with over 50 different tools and platforms. We don’t force a technology change — we enhance what’s working.
06
What is LangChain and do you use it?
LangChain is a framework for building AI applications — particularly for chaining together AI models, tools, and data sources. We use it for complex multi-step AI pipelines, RAG systems, and agent architectures.
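The core idea behind chaining is that each step's output feeds the next. The sketch below shows that pattern in plain Python, without the library; the three stages are stand-ins for real retrieval, prompting, and model calls.

```python
from functools import reduce
from typing import Callable

def chain(*steps: Callable) -> Callable:
    """Compose steps left-to-right: each step's output feeds the next."""
    return lambda x: reduce(lambda acc, step: step(acc), steps, x)

# Stand-ins for real pipeline stages (retrieval, prompt building, model call).
retrieve = lambda q: {"question": q, "context": "docs about " + q}
build_prompt = lambda d: f"Answer using {d['context']}: {d['question']}"
call_model = lambda p: "LLM response to: " + p  # placeholder for an API call

pipeline = chain(retrieve, build_prompt, call_model)
print(pipeline("vector databases"))
```

LangChain packages this pattern together with connectors for models, tools, and data sources, which is what makes it useful for RAG systems and agent architectures.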
07
What is a vector database and when do I need one?
A vector database (like Pinecone or Supabase pgvector) stores information in a way that allows AI to search by meaning, not just keywords. You need one when building RAG systems — so AI can retrieve the right internal documents to answer questions accurately.
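Searching by meaning boils down to comparing embedding vectors. The sketch below uses tiny hand-made vectors as stand-ins for real embeddings; a production system would get them from an embedding model and store them in Pinecone or pgvector.

```python
import math

def cosine(a, b):
    """Cosine similarity: closer to 1.0 means closer in meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "vector database": documents with hand-made 3-d embeddings.
docs = {
    "refund policy":   [0.9, 0.1, 0.0],
    "shipping times":  [0.1, 0.9, 0.1],
    "api rate limits": [0.0, 0.2, 0.9],
}

def search(query_vec, k=1):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query like "how do I get my money back?" would embed near "refund policy".
print(search([0.8, 0.2, 0.1]))  # ['refund policy']
```

A keyword search would miss that match entirely, because "money back" never appears in "refund policy"; that gap is exactly what vector search closes.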
08
Do you use GPT-4 or Claude, or do you build custom models?
GPT-4o and Claude 3.5 handle most production deployments; they offer the best balance of capability and reliability. We fine-tune custom models when you have specific domain knowledge that general models lack.
09
Can you fine-tune an AI model on my own data?
Yes. Fine-tuning trains a model specifically on your data, making it more accurate for your domain. We handle data preparation, training, evaluation, and deployment. It’s particularly valuable for industry-specific terminology or classification tasks.
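Data preparation typically means converting your examples into the provider's training format. The sketch below targets OpenAI's chat-format JSONL (one training example per line); the questions, answers, and system prompt are hypothetical stand-ins for real client data.

```python
import json

# Hypothetical labelled examples (domain question, expert answer).
examples = [
    ("What does 'FOB' mean on an invoice?",
     "Free On Board: the seller covers costs until goods are loaded."),
    ("Is demurrage billable?",
     "Yes, demurrage is billed per day past the agreed free time."),
]

def to_jsonl(pairs, system="You are a logistics terminology assistant."):
    """Serialise (question, answer) pairs into chat-format JSONL lines."""
    lines = []
    for question, answer in pairs:
        record = {"messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)

print(to_jsonl(examples))  # one training example per line, ready for upload
```

Evaluation then compares the fine-tuned model against the base model on held-out examples before anything is deployed.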
10
Do you build on cloud platforms like AWS, Azure, or GCP?
Yes. We deploy on all major cloud platforms depending on your existing infrastructure and preferences. We also work with on-premise deployments for clients with strict data residency requirements.
Still have questions?
Book a free 30-minute discovery call. We’ll answer anything and show you exactly what AI can do for your business.
Talk to Congni Tech →