Types of LLM Integration refer to the different ways of connecting an application with AI models to build intelligent, scalable systems.
Common approaches include Direct API, Multi-Provider, Local LLM, Framework-Based, RAG, Hybrid, and Agent-Based integration.

Example (Spring Boot using RestTemplate)

```java
import org.springframework.http.*;
import org.springframework.web.client.RestTemplate;

public class OpenAIService {

    private static final String API_URL = "https://api.openai.com/v1/chat/completions";
    private static final String API_KEY = "YOUR_API_KEY"; // load from configuration; never hard-code in production

    public String getResponse(String userInput) {
        RestTemplate restTemplate = new RestTemplate();

        // Note: user input should be JSON-escaped before interpolation,
        // otherwise quotes in the input will break the request body.
        String requestBody = """
                {
                  "model": "gpt-4o-mini",
                  "messages": [
                    { "role": "user", "content": "%s" }
                  ]
                }
                """.formatted(userInput);

        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        headers.setBearerAuth(API_KEY);

        HttpEntity<String> entity = new HttpEntity<>(requestBody, headers);
        ResponseEntity<String> response = restTemplate.exchange(
                API_URL, HttpMethod.POST, entity, String.class);
        return response.getBody();
    }
}
```
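The call above returns the provider's raw JSON. For projects that do not use Spring, the same request can be made with the JDK's built-in `java.net.http.HttpClient`. A minimal sketch (the class name and `buildBody` helper are illustrative, not a standard API); it also escapes quotes and backslashes in the user input so they cannot break the JSON body:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JdkOpenAIClient {

    private static final String API_URL = "https://api.openai.com/v1/chat/completions";
    private final String apiKey;

    public JdkOpenAIClient(String apiKey) {
        this.apiKey = apiKey;
    }

    // Builds the JSON payload; escapes backslashes and quotes so user input
    // cannot break out of the JSON string.
    static String buildBody(String userInput) {
        String escaped = userInput.replace("\\", "\\\\").replace("\"", "\\\"");
        return """
                {
                  "model": "gpt-4o-mini",
                  "messages": [ { "role": "user", "content": "%s" } ]
                }
                """.formatted(escaped);
    }

    public String getResponse(String userInput) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(API_URL))
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer " + apiKey)
                .POST(HttpRequest.BodyPublishers.ofString(buildBody(userInput)))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```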

Example (Multi-Provider Routing)

```java
public class LLMRouterService {

    public String getResponse(String provider, String input) {
        if (provider.equals("openai")) {
            return callOpenAI(input);
        } else if (provider.equals("gemini")) {
            return callGemini(input);
        } else {
            return "Unsupported provider";
        }
    }

    private String callOpenAI(String input) {
        return "Response from OpenAI for: " + input;
    }

    private String callGemini(String input) {
        return "Response from Gemini for: " + input;
    }
}
```
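The if/else chain grows with every new provider. A map-based registry keeps the router open to extension: each provider is registered once and lookup replaces branching. A minimal sketch (the `LLMRouter` name and registration style are assumptions, not a standard API):

```java
import java.util.Map;
import java.util.function.UnaryOperator;

public class LLMRouter {

    // Maps a provider name to the function that calls that provider.
    private final Map<String, UnaryOperator<String>> providers;

    public LLMRouter(Map<String, UnaryOperator<String>> providers) {
        this.providers = providers;
    }

    public String getResponse(String provider, String input) {
        return providers
                .getOrDefault(provider, in -> "Unsupported provider")
                .apply(input);
    }
}
```

Adding a provider then means registering one entry at startup instead of editing the routing method.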
Example (Ollama API)

```java
import org.springframework.http.*;
import org.springframework.web.client.RestTemplate;

public class LocalLLMService {

    public String askLocalModel(String prompt) {
        RestTemplate restTemplate = new RestTemplate();
        String url = "http://localhost:11434/api/generate";

        String request = """
                {
                  "model": "llama3",
                  "prompt": "%s",
                  "stream": false
                }
                """.formatted(prompt);

        // Send the body with an explicit JSON content type;
        // a bare String would otherwise be posted as text/plain.
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        return restTemplate.postForObject(url, new HttpEntity<>(request, headers), String.class);
    }
}
```

Example (Spring AI, Framework-Based)

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

@Service
public class AIService {

    private final ChatClient chatClient;

    public AIService(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    public String getAnswer(String prompt) {
        return chatClient.prompt()
                .user(prompt)
                .call()
                .content();
    }
}
```
Example Controller

```java
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/ai")
public class AIController {

    private final AIService aiService;

    public AIController(AIService aiService) {
        this.aiService = aiService;
    }

    @GetMapping("/ask")
    public String ask(@RequestParam String q) {
        return aiService.getAnswer(q);
    }
}
```

Example (Routing Logic)

```java
public class HybridLLMService {

    private final LocalLLMService localLLMService;
    private final OpenAIService openAIService;

    public HybridLLMService(LocalLLMService localLLMService, OpenAIService openAIService) {
        this.localLLMService = localLLMService;
        this.openAIService = openAIService;
    }

    public String getResponse(String input, boolean sensitiveData) {
        if (sensitiveData) {
            return localLLMService.askLocalModel(input);
        } else {
            return openAIService.getResponse(input);
        }
    }
}
```
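How the `sensitiveData` flag gets decided is left open above. One lightweight option is keyword screening before routing; a sketch with a hypothetical keyword list (a production system might use regex patterns or a dedicated classifier instead):

```java
import java.util.List;
import java.util.Locale;

public class SensitivityDetector {

    // Hypothetical keyword list; extend or replace to match your domain.
    private static final List<String> SENSITIVE_KEYWORDS =
            List.of("password", "salary", "ssn", "medical", "confidential");

    // Returns true when the input mentions any sensitive keyword,
    // so the hybrid router can keep it on the local model.
    public static boolean isSensitive(String input) {
        String lower = input.toLowerCase(Locale.ROOT);
        return SENSITIVE_KEYWORDS.stream().anyMatch(lower::contains);
    }
}
```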
Example (Simple Retrieval + LLM Flow)

```java
public class RAGService {

    private final DocumentService documentService;
    private final OpenAIService openAIService;

    public RAGService(DocumentService documentService, OpenAIService openAIService) {
        this.documentService = documentService;
        this.openAIService = openAIService;
    }

    public String askQuestion(String question) {
        String context = documentService.searchRelevantData(question);
        String finalPrompt = "Answer based on this context: " + context +
                " Question: " + question;
        return openAIService.getResponse(finalPrompt);
    }
}
```
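The `DocumentService` used above is assumed. To make the retrieval step concrete, here is a deliberately naive in-memory implementation that scores documents by word overlap with the question; real RAG systems use embeddings and a vector store instead:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Locale;
import java.util.Set;
import java.util.stream.Collectors;

public class DocumentService {

    private final List<String> documents;

    public DocumentService(List<String> documents) {
        this.documents = documents;
    }

    // Scores each document by how many question words it contains
    // and returns the best match as the context for the prompt.
    public String searchRelevantData(String question) {
        Set<String> queryWords = tokenize(question);
        return documents.stream()
                .max(Comparator.comparingLong(doc ->
                        tokenize(doc).stream().filter(queryWords::contains).count()))
                .orElse("");
    }

    private static Set<String> tokenize(String text) {
        return Arrays.stream(text.toLowerCase(Locale.ROOT).split("\\W+"))
                .filter(w -> !w.isBlank())
                .collect(Collectors.toSet());
    }
}
```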
Example (Tool Calling Agent Style)

```java
public class AIAgentService {

    private final EmailService emailService;
    private final MeetingService meetingService;

    // Dependencies must be supplied; otherwise the fields stay null.
    public AIAgentService(EmailService emailService, MeetingService meetingService) {
        this.emailService = emailService;
        this.meetingService = meetingService;
    }

    public String processTask(String task) {
        if (task.contains("email")) {
            emailService.sendEmail("[email protected]", "Hello from AI");
            return "Email sent successfully";
        }
        if (task.contains("meeting")) {
            meetingService.scheduleMeeting("Team Sync");
            return "Meeting scheduled successfully";
        }
        return "No valid action found";
    }
}
```
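Matching on substrings of the task is only a stand-in. In a real tool-calling agent, the model itself selects the tool and the application parses its structured reply. A minimal sketch assuming a made-up `TOOL:<name>:<argument>` reply format (real providers return structured JSON tool calls, which you would parse instead):

```java
public class ToolCallParser {

    // Holds the tool name and argument extracted from the model's reply.
    public record ToolCall(String name, String argument) {}

    // Parses a reply of the assumed form "TOOL:<name>:<argument>".
    // Returns null when the reply is plain text rather than a tool call.
    public static ToolCall parse(String modelReply) {
        if (!modelReply.startsWith("TOOL:")) {
            return null;
        }
        String[] parts = modelReply.split(":", 3);
        return new ToolCall(parts[1], parts.length > 2 ? parts[2] : "");
    }
}
```

The agent service would then dispatch on `ToolCall.name()` instead of scanning the raw task string.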
| Feature | Direct API Integration | Framework-Based Integration |
|---|---|---|
| Approach | Directly calls LLM APIs | Uses frameworks like Spring AI or LangChain |
| Complexity | Low to Medium | Low (but structured) |
| Code Handling | Manual request/response handling | Abstracted and simplified |
| Development Speed | Slower | Faster |
| Scalability | Limited | High |
| Best For | Small or simple projects | Production-grade applications |

| Feature | Normal LLM Integration | RAG (Retrieval Augmented Generation) |
|---|---|---|
| Data Source | Only trained model data | External data + LLM |
| Knowledge Base | Static knowledge | Dynamic + custom knowledge |
| Accuracy for business data | Low | High |
| Use of Documents/DB | Not used | Uses PDFs, DB, files, APIs |
| Complexity | Simple | Medium to High |
| Best For | General chatbot | Enterprise AI systems |
LLM integration is a modern approach to building intelligent applications by connecting your system to powerful AI providers.
With the right architecture, whether Direct API, Multi-Provider, Local LLM, or Framework-Based integration, developers can build scalable, secure, and production-ready AI systems.