A Guide to Using LLMs Securely
Learn how to harness the power of LLMs while ensuring data privacy and compliance.

The Rise of Large Language Models
Large Language Models (LLMs) such as the GPT and Claude families are transforming the landscape of artificial intelligence, offering powerful capabilities for tasks ranging from content generation to complex problem-solving. Even early research on GPT-4 found that workers using it completed on average 12% more tasks, worked 25% faster, and produced 40% higher-quality output. Meanwhile, 47% of employees believe they will use AI for more than 30% of their day-to-day tasks within a year.
Data Privacy Concerns with LLM Usage
For most users, leveraging these models means accessing them through their respective websites. However, you might rightfully worry about where your information is going and how it is being used. Although companies like OpenAI offer some level of protection, such as turning off history, this may not be sufficient for information that is sensitive, classified, or secret.

GDPR Compliance and Protecting Sensitive Information
In the UK and the EU, compliance with the General Data Protection Regulation (GDPR) is crucial. Mishandling data could lead to significant repercussions beyond regulatory penalties, affecting company secrets, intellectual property, and proprietary information. Ensuring these elements remain secure is essential for maintaining a competitive advantage and safeguarding core business assets.

Harnessing LLMs Securely
Successfully harnessing the power of LLMs while ensuring data privacy and compliance can be challenging. There are several methods to utilise these powerful tools securely, each with its own set of advantages and limitations. Below, we discuss two primary approaches: running models locally and using cloud platforms through Models as a Service (MaaS).
Running Models Locally
One way to harness LLMs securely is by downloading a model and running it on a local server. However, this approach limits you to open-source models and requires substantial hardware resources. For example, running Llama 3.1 70B calls for a high-end multi-core processor and at least 32 GB of system RAM, preferably 64 GB or more. On the GPU side, you would need 2-4 NVIDIA A100 (80 GB) cards, or 8 NVIDIA A100 (40 GB) cards, to run the model in 8-bit mode, and even then you are still only running Llama 3.1. Such a setup is likely to cost at least £250,000 upfront in hardware alone.
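To see where those GPU numbers come from, a common rule of thumb (an estimation heuristic, not vendor guidance) is that the memory needed to hold a model's weights is the parameter count multiplied by the bytes per parameter at the chosen precision, plus an overhead margin for activations and the KV cache. A minimal sketch:

```python
def estimate_vram_gb(params_billion: float, bytes_per_param: float,
                     overhead: float = 0.2) -> float:
    """Rough VRAM estimate: weight memory plus a fractional overhead
    for activations and KV cache. Back-of-the-envelope only."""
    weights_gb = params_billion * bytes_per_param  # 1e9 params * bytes = GB
    return weights_gb * (1 + overhead)

# Llama 3.1 70B at 8-bit precision (1 byte/param), assumed 20% overhead:
print(round(estimate_vram_gb(70, 1.0), 1))  # -> 84.0 GB
```

At roughly 84 GB, the model does not fit on a single 80 GB A100, which is why multi-GPU configurations are required even at 8-bit precision.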
Cloud Platforms: Models as a Service (MaaS)
A more accessible and practical alternative is to use cloud platforms that offer "Models as a Service" (MaaS). Platforms such as Microsoft Azure, Google Cloud, and Amazon Web Services allow you to use LLMs without directly signing up with model providers like OpenAI. This approach offers a high level of control over data governance, tenancy, and sovereignty. You can ensure your data never leaves your tenant and, in many cases, never even leaves your local region.
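As an illustration of the MaaS approach, a call to a model deployed inside your own Azure tenant via the `openai` Python SDK might look like the sketch below. The endpoint, deployment name, and API version here are placeholder assumptions; substitute your own tenant's values.

```python
import os

def build_messages(system_prompt: str, user_prompt: str) -> list:
    """Assemble the chat payload for a Chat Completions request."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

def ask_azure_deployment(user_prompt: str) -> str:
    """Send a prompt to a model deployed in your own Azure tenant, so the
    request never touches the model provider's consumer service.
    Requires the `openai` package and the AZURE_OPENAI_ENDPOINT /
    AZURE_OPENAI_API_KEY environment variables."""
    from openai import AzureOpenAI  # pip install openai
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-06-01",  # placeholder; match your deployment
    )
    response = client.chat.completions.create(
        model="my-gpt4o-deployment",  # hypothetical deployment name
        messages=build_messages("You are a helpful assistant.", user_prompt),
    )
    return response.choices[0].message.content
```

Because the endpoint lives in your own subscription, access control, logging, and region selection are governed by your tenant's policies rather than the model provider's consumer terms.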

Benefits of Cloud Platforms
Cloud platforms enable workers to use LLMs for their tasks with the assurance that data is secure and private. These platforms are designed for deployment at scale, providing access to a broad spectrum of models, including those from OpenAI, Mistral, and Anthropic. Below are the key benefits of using Models as a Service (MaaS).
Access to a Variety of Models
Cloud platforms provide access to various high-quality models without the need for direct sign-ups with model providers like OpenAI. This ensures you can leverage the best-in-class models for your tasks.
Flexible Pricing Models
These platforms offer flexible "pay-as-you-go" pricing, avoiding the need for per-seat premium subscriptions. For example, GPT-4o mini costs $0.15 per million input tokens and $0.60 per million output tokens, with estimated monthly usage of $1-10 per user. This is cost-effective compared to typical $20/month subscriptions.
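To see why pay-as-you-go usually undercuts a flat subscription, the monthly cost can be sketched directly from token counts. The usage figures below are illustrative assumptions, not measurements:

```python
def monthly_cost_usd(input_tokens: int, output_tokens: int,
                     in_price_per_m: float = 0.15,
                     out_price_per_m: float = 0.60) -> float:
    """Pay-as-you-go cost: (tokens / 1M) * per-million-token price,
    summed over input and output. Prices default to GPT-4o mini rates."""
    return (input_tokens / 1e6) * in_price_per_m \
         + (output_tokens / 1e6) * out_price_per_m

# Hypothetical heavy user: 4M input and 2M output tokens in a month.
print(round(monthly_cost_usd(4_000_000, 2_000_000), 2))  # -> 1.8
```

Even this assumed heavy usage lands well under a $20/month flat subscription, which is why per-token billing tends to favour organisations with many light or moderate users.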
Enhanced Data Security
As noted above, with cloud platforms your data need never leave your tenant or, in many cases, your local region. This degree of control over data governance, tenancy, and sovereignty keeps your data secure and private.
ADSP’s Endorsement
“It was important to provide our team a confidential and secure alternative to externally-hosted proprietary models, as we believed that LLM use was both necessary and inevitable for our work."
Ross Witeszczak, Founding Partner at ADSP
Looking for Secure Expert MaaS Solutions?
Discover how our AI solutions can transform your business by setting up secure, scalable MaaS solutions tailored to your needs.