Duck Tales - Considerations for Safeguarding Your GenAI Apps

A rubber duck will serve as our guide through four phases of GenAI usage, highlighting crucial considerations for safeguarding this technology.

Alena Schmickl
Amazon Employee
Published Apr 2, 2024
(The picture of the rubber duck was generated with the Amazon Titan Image Generator G1 model.)
Have you ever suddenly arrived at the answer to a problem simply by articulating the issue to someone else? Congratulations - you’ve engaged in Rubber Duck Debugging! The technique involves verbalizing your thoughts to an inanimate object, a rubber duck, and thereby spotting flaws in your own reasoning.
Rubber Duck Debugging is useful in many domains beyond software debugging. A rubber duck will therefore serve as our guide through four phases of Generative AI (GenAI) usage, highlighting crucial considerations for safeguarding this innovative technology. The journey follows the principles outlined in the GenAI Security Scoping Matrix from AWS, providing a robust framework for navigating the landscape of GenAI usage.

Phase 1 — Launching a Smart Rubber Duck: Safeguarding Use of GenAI Consumer Apps

While a silent rubber duck serves its purpose, imagine the possibilities if it could actively assist in problem-solving. Let’s bring this vision to life by creating a smart rubber duck capable of providing actionable advice.
To speed up time to market, we leverage GenAI. Beginning with public GenAI services, we’ll use PartyRock to create a roadmap for product development and to design promotional images for the smart duck on our website.
During this initial phase, it’s crucial to grasp the implications of service usage, particularly in handling inputs and outputs. By employing “off-the-shelf” GenAI solutions, you’ll adhere to standard contract terms rather than enterprise agreements.
Treat your prompts (e.g. your chat inputs) as public information. You shouldn’t input any confidential, proprietary, or personally identifiable information. Also, use the generated outputs with care: you are ultimately responsible for how and where the output is used.
Depending on your use case, it is advisable to dive into the service provider’s terms of service and privacy policy. Clarify who can access the data and whether prompts contribute to model training. Keep in mind that terms of service and privacy policies may be revised over time.
As you are using a consumer service at this stage, expect limited options regarding resilience, since you depend on the third-party service’s availability. Validate whether using a consumer service aligns with your business’s critical tasks and their availability requirements.

Phase 2 — Building the First Talking Duck: Safeguarding Use of Enterprise GenAI apps

With your initial planning in place, you’re developing the first iteration of your smart duck product. This MVP aims to respond with a friendly “Hello world!” upon interaction. To speed up development, you opt for a GenAI-powered coding companion – Amazon CodeWhisperer – to enable code generation based on existing code.
As you move towards more professional use of GenAI at this stage, you upgrade to the enterprise edition of the service: this phase involves leveraging GenAI for a business-critical asset, the new code for your duck product.
While similar considerations to the previous stage apply, place additional caution on the risk assessment of third-party providers, focusing on:
  • Data residency
  • Privacy policies
  • Terms of service
  • End-user license agreement
The duck code will be an asset you want to safeguard. Ask yourself: what data is stored and processed, and where? Given its future use in a marketed product, are you aware of the legal implications of using outputs (GenAI-generated code) commercially? It is also advisable to exercise any available opt-outs to prevent your data from being used for training.
These questions serve to heighten your awareness as you navigate the path to secure GenAI usage. Rather than viewing them as obstacles requiring significant effort to overcome, consider them as invaluable checkpoints guiding your journey. Enterprise versions of GenAI services often streamline the resolution of these concerns. For instance, with Amazon CodeWhisperer, you retain ownership of all code suggestions, and the professional tier ensures that no content is collected for service enhancement purposes by default.

Phase 3 — Empowering your Duck with Knowledge: Safeguarding Use of Foundation Models

You are one step closer to your vision. Your ducks are now able to react to audio input. To enable them to provide actually helpful responses, you integrate them with a pre-trained model. This could be through platforms like OpenAI or fully managed services like Amazon Bedrock, which offers access to multiple foundation models. Alternatively, you might opt to host a model yourself.
While you are still consuming a third-party service, you’re transitioning from merely being a GenAI user to assuming the role of a GenAI builder. This signifies a shift in responsibility: in addition to managing your own data and intellectual property, you’re now accountable for safeguarding the data belonging to your end-users — the individuals talking to your ducks.
This shifts your responsibility from solely evaluating third-party risks to conducting threat modeling for your own application. With this increased control over data flows and the ability to implement governance rules for compliance, you also assume responsibility for ensuring the secure operation of your infrastructure.
It’s inevitable that users will disclose sensitive information during conversations with your ducks. Therefore, ensure this data isn’t used for additional model training or exposed further. When leveraging a GenAI model as a service, check whether input data is used to train the model, and ensure a clear view on ownership, copyright, and quality. For instance, in the case of Amazon Bedrock, prompts and continuations are not used to train AWS models and are not shared with third parties.
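To make this concrete, here is a minimal sketch of one way to redact personally identifiable information from user utterances before they reach the model or your logs. It uses Amazon Comprehend’s PII detection; the function name, the confidence threshold, and the choice to mask rather than reject the input are illustrative assumptions, not a prescription:

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

def redact_pii(text: str, min_score: float = 0.8) -> str:
    """Mask PII spans detected by Amazon Comprehend before the text
    is sent to a foundation model or written to logs."""
    entities = comprehend.detect_pii_entities(Text=text, LanguageCode="en")["Entities"]
    # Replace spans back to front so earlier offsets stay valid.
    for entity in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        if entity["Score"] >= min_score:
            text = text[: entity["BeginOffset"]] + f"[{entity['Type']}]" + text[entity["EndOffset"]:]
    return text

# Example: "My card number is 4111 1111 1111 1111" -> "My card number is [CREDIT_DEBIT_NUMBER]"
```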
This is also the stage at which you should architect for resiliency. Timely availability of advice from the duck is essential for a positive user experience. Appropriate actions depend on your use of the foundation model. When consuming it through a third-party API, check for regional availability and compare SLAs with your own requirements. On the other hand, if you’re hosting the model yourself, plan for failover scenarios and test them to guarantee seamless operation.
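As a sketch of what failover might look like when consuming a model through an API, the snippet below tries a primary AWS Region and falls back to a secondary one. The Regions, the model ID, and the request body shape are assumptions for illustration, not recommendations:

```python
import json
import boto3
from botocore.exceptions import BotoCoreError, ClientError

# Assumptions for illustration: pick Regions and a model ID that fit your workload.
REGIONS = ["us-east-1", "us-west-2"]
MODEL_ID = "amazon.titan-text-express-v1"

def ask_duck(prompt: str) -> str:
    """Invoke the model in the primary Region; fail over to the next on error."""
    last_error = None
    for region in REGIONS:
        client = boto3.client("bedrock-runtime", region_name=region)
        try:
            response = client.invoke_model(
                modelId=MODEL_ID,
                contentType="application/json",
                body=json.dumps({"inputText": prompt}),
            )
            return json.loads(response["body"].read())["results"][0]["outputText"]
        except (ClientError, BotoCoreError) as err:
            last_error = err  # log, then try the next Region
    raise RuntimeError("Model unavailable in all configured Regions") from last_error
```

Whichever form your failover takes, exercise it regularly; an untested fallback path tends to fail exactly when you need it.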
During this phase, it’s also crucial to address seven of the top threats for large language models (LLMs) from the Open Web Application Security Project (OWASP):
  1. Prompt injection
  2. Insecure output handling
  3. Sensitive information disclosure
  4. Insecure plugin design
  5. Excessive agency
  6. Supply chain vulnerabilities
  7. Model denial of service
To address some of the safeguarding considerations at this stage, you can leverage proven security measures similar to those used for traditional ML models. This may involve implementing an API Gateway for authentication and authorization, as well as for preventing overload through throttling. Additionally, deploying a Web Application Firewall can help filter out malicious requests and block bots. However, since LLMs accept free text, filtering the payload becomes challenging. Therefore, it’s crucial to explore supplementary features tailored to your setup, e.g. Guardrails for Amazon Bedrock, which enables the evaluation of inputs and responses against predefined policies and can block undesirable topics within your GenAI application. These supplementary measures can effectively bolster the security posture of your system.
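As a rough sketch of how such a guardrail plugs into a model call (assuming a guardrail has already been created and you know its ID and version), the invocation might look like this; the guardrail ID and model ID below are placeholders:

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholders: guardrail ID/version and model ID are hypothetical values.
response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=json.dumps({"inputText": "Should I buy this stock?"}),
    guardrailIdentifier="your-guardrail-id",
    guardrailVersion="1",
)
result = json.loads(response["body"].read())
# When the guardrail intervenes, the response contains your configured
# blocked-input or blocked-output message instead of a model completion.
print(result)
```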

Phase 4 — Tailoring your Duck to User Lingo: Safeguarding a Fine-tuned Model

In this last phase you are aiming to make your duck even better. While users are generally satisfied with the product, you’ve observed that those from specific industries use language nuances that your ducks struggle to comprehend. You decide to introduce a new duck line — the “Wall Street Duck” — tailored for users in the financial sector. The aim isn’t to integrate domain-specific knowledge (which calls for Retrieval Augmented Generation), but rather to adapt to financial jargon and improve the handling of specialized requests by fine-tuning a foundation model.
During this stage, it’s important to be extremely mindful of which data you use for model fine-tuning. Pay particular attention to sensitive or biased data, especially if you use chat histories for training purposes. Remember that the fine-tuned model inherits the data classification of the data used for its refinement. Therefore, avoid tuning on personally identifiable information to mitigate risks.
This will also help you address another consideration: the risk of data leakage. Data used in fine-tuning may be extracted later, thereby becoming available to users who have no right to it.
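As a minimal sketch of the hygiene step this implies, you might screen chat-history records before they enter the training set. The regex patterns and the JSONL record format below are illustrative stand-ins; a production pipeline would rely on a dedicated PII detection service such as Amazon Comprehend:

```python
import json
import re

# Illustrative patterns only; use a proper PII detection service in production.
PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),    # email addresses
    re.compile(r"\b\d{3}[- ]?\d{2}[- ]?\d{4}\b"),  # SSN-like numbers
]

def clean_training_set(in_path: str, out_path: str) -> None:
    """Drop any chat-history record containing PII-like content
    before it enters the fine-tuning dataset."""
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            record = json.loads(line)
            text = record.get("prompt", "") + record.get("completion", "")
            if not any(p.search(text) for p in PII_PATTERNS):
                dst.write(line)

# Hypothetical file names for illustration.
clean_training_set("chat_history.jsonl", "finetune_ready.jsonl")
```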
At this stage, three more threats from the OWASP list mentioned above become relevant:
  • Training data poisoning
  • Model theft
  • Overreliance
In summary, this last phase expands the considerations addressed in the previous stages, such as the importance of constructing a resilient architecture, by placing an even greater emphasis on the sensible utilization of data.

Let’s recap the insights gained from the journey of our duck. When navigating GenAI apps, prioritizing responsible and safe utilization of this technology is paramount. Here’s a structured approach to consider:
  1. Identify Your GenAI Stage: Begin by identifying the specific type of GenAI you’re utilizing or developing, corresponding to the phase of the duck’s journey. Understanding your current position provides clarity on the necessary steps forward.
  2. Implement Relevant Controls: Once you’ve identified your GenAI stage, implement appropriate controls to address the considerations associated with its usage. These controls serve as safeguards against potential risks and ensure the integrity of your GenAI applications.
  3. Appoint a Governance Owner: While not tasked with executing every action, this individual ensures that necessary measures are brought to everyone’s attention and carried out.
  4. Securing GenAI Apps is a Continuous Journey: Recognize that securing GenAI apps is an ongoing process, not a one-time endeavor. Regularly review and refine your controls to adapt to evolving threats and technological advancements.
By following these steps, you can navigate the landscape of GenAI apps with a focus on responsible and secure deployment, safeguarding both your data and your users’ privacy.
 

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
