Wrongful Collection of Data Explained
This article is from RISQ Consulting’s Zywave client portal, a resource available to all RISQ Consulting clients. Please contact your Benefits Consultant or Account Executive for more information or for help setting up your own login.
Businesses of all sizes and sectors may be subject to unlawful data processing claims. According to the International Association of Privacy Professionals, lawsuits focusing on whether businesses lawfully collect and use personal data have been steadily increasing. These claims can cause significant financial and reputational damage to companies.
As businesses analyze the risks associated with personal data collection, they must be familiar with an evolving regulatory landscape and take steps to address their exposures. This article provides more information on what wrongful data collection is and areas of concern. It also provides tips for businesses to mitigate the risks associated with wrongful data collection.
What Is Wrongful Data Collection?
What constitutes wrongful, or unlawful, data collection varies by jurisdiction. While there currently isn’t an overarching national consumer data privacy law in the United States, several states have enacted legislation that affords individuals data privacy protections. Certain U.S. laws also apply to specific sectors (e.g., the Health Insurance Portability and Accountability Act, or HIPAA, applies to health care) and individuals (e.g., children receive data protection through the Children’s Online Privacy Protection Act). Additionally, different laws are in place internationally. This patchwork of legislation can make it difficult for businesses to understand the various rules that apply to them.
Even though compliance may be complicated, businesses have a duty to follow applicable data privacy laws. For example, depending on the jurisdiction, there may be regulations that dictate how or whether an organization may collect, use and share personal data. There may also be requirements for the business to inform consumers that data is being collected and to allow them to opt out of that collection. Failure to adhere to relevant laws may be considered wrongful, and businesses may be subject to fines and potential litigation.
Areas of Concern
Certain aspects of personal data collection are areas of concern. Examples of areas laws may regulate include:
- Biometric data—Collection of data regarding unique physical characteristics (e.g., fingerprints, faces, voice patterns) has been regulated by some jurisdictions. For example, Illinois has enacted the Biometric Information Privacy Act (BIPA), which forbids businesses from collecting biometric data unless the business has informed the individual about the data being collected, provided information on how long it will be stored and received written consent.
- Pixel tracking—The use of pixel technology to track how individuals use websites in order to target advertisements may be subject to regulations. For example, under the European Union’s General Data Protection Regulation (GDPR), pixel tracking technology may only be used if an individual consents, while the California Privacy Rights Act (CPRA) requires users to be notified that pixels are in use and how the collected data will be processed.
Additionally, the United States Video Privacy Protection Act (1988), originally enacted to prevent the disclosure of personal information obtained from renting videos, has seen a modern application in lawsuits involving data collected through pixel tracking. Furthermore, HIPAA can be used to safeguard patients’ confidential health data that may be exposed to third parties utilizing pixels.
- Genetic information—Data that is compiled from the analysis of a person’s biological sample and involves genetic material (e.g., DNA, genes, chromosomes) may also be subject to regulations. For example, the Genetic Information Privacy Act in California provides its residents with rights and protections over their data when they use direct-to-consumer genetic testing companies.
- Precise geolocation—There may be legal obligations regarding collecting and processing data that is used to locate a consumer within a specific area. For example, the CPRA requires businesses to notify individuals about the collection of precise geolocation information and give them the right to limit its use and disclosure.
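To make the pixel-tracking and consent requirements above more concrete, here is a minimal sketch of consent-gated tracking logic. It assumes a hypothetical consent record (`notified`, `trackingConsent`) and a generic pixel URL; the function and field names are illustrative, not from any specific law or library:

```javascript
// Sketch: consent-gated pixel tracking (hypothetical helper names).
// Under laws like the GDPR, a tracking pixel should only load after the
// user opts in; under the CPRA, users must also be notified about the
// collection and how the data will be processed.

function shouldFirePixel(consent) {
  // Fire only when the user was notified AND explicitly opted in.
  return consent.notified === true && consent.trackingConsent === true;
}

function buildPixelUrl(baseUrl, pageId) {
  // A 1x1 image request carrying minimal data --
  // collect only what is necessary.
  return `${baseUrl}?page=${encodeURIComponent(pageId)}`;
}

// No opt-in recorded yet: no pixel request is made.
console.log(shouldFirePixel({ notified: true, trackingConsent: false })); // false
// Notified and opted in: the pixel may fire.
console.log(shouldFirePixel({ notified: true, trackingConsent: true })); // true
```

In a real deployment this check would sit in front of whatever loads the pixel image or analytics script, so that no tracking request leaves the browser before consent is recorded.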
Risk Mitigation Strategies
It is essential for businesses to implement risk management strategies to reduce the likelihood of lawsuits, reputational damage, and regulatory fines and penalties stemming from wrongful data collection claims. Examples of techniques to consider include:
- Weigh the benefits and drawbacks of data collection and determine if alternative marketing strategies that do not require data collection exist.
- Provide notice and obtain consent before collecting, processing, using, sharing or selling personal data.
- Allow individuals to opt out of having their personal data collected.
- Limit personal data collection to only what is necessary.
- Monitor regulations as they are quickly evolving.
- Conduct audits of data collection practices to ensure they conform to applicable regulations.
- Provide education to employees on proper technology use and applicable legislation.
- Review insurance coverage with a licensed professional to determine if coverage is available for wrongful data collection claims.
Conclusion
Claims of wrongful data collection are rising, and businesses should take steps to mitigate their exposure to this risk. For more information and risk management guidance, contact us today.
Assessing the Viability of AI as a Self-diagnosis Tool
Artificial intelligence (AI) has created revolutionary advances across many industries. Now, it’s paving the way as a tool to self-diagnose medical conditions or get answers to health-related questions. Self-diagnosis is a growing practice, as people’s primary access point for health care information has shifted from professionals to the internet. Especially when you’re having trouble getting an appointment, the internet has proven itself a fast, easily accessible and free source of information. Given the internet’s popularity for answering some of your most urgent health-related questions, you may wonder how AI can help. Keep in mind that while AI is new and exciting, it’s not a replacement for professional health care.
This article explores the use of generative AI for medical self-diagnosis and its benefits, limitations and viability.
Generative AI for Health Care
Generative AI is a type of technology that produces text, images, audio or other content. With the introduction of AI chatbots, more people may be turning to them to answer their health-related questions. Some common tools used for this purpose include OpenAI’s ChatGPT and Google’s Med-PaLM. These types of large language model (LLM) chatbots can predict the next word in a sequence to answer questions in a human-like style.
Amid a shortage of health care workers, chatbots could help answer your questions. Initial tests by researchers suggest these AI programs are more accurate than a standard Google search.
The Pros
AI tools can potentially reduce medical costs for patients and health care providers. Here are some more potential benefits of using generative AI for medical self-diagnosis:
- Increased accessibility
- Quicker triaging
- Boosted health literacy
- Preserved anonymity
All of these factors contribute to an enhanced patient experience and improved engagement. Chatbots are also considered easier to use than online symptom checkers.
The Cons
While generative AI has great potential, it’s important to understand that there are also some limitations and pitfalls, including the following:
- False information
- Misinterpretation of information
- Ethical concerns (e.g., data privacy and bias)
- Risk of ignoring medical advice
Due to these risks, some LLM chatbots include disclaimers that they shouldn’t be used to diagnose serious conditions, provide instructions for curing conditions or manage life-threatening issues.
Using Generative AI in Medical Self-diagnosis
While generative AI tools may help you quickly answer health-related questions and self-diagnose conditions, relying solely on them could be unsafe. As in other applications, AI tools are meant to be complementary, serving as an additional source of information. They are useful for general information and can help simplify it so you can be an educated health care consumer.
Generative AI is not a replacement for medical advice from a professional, but it can be used to supplement professional medical advice. If you plan to use AI to answer your nonurgent health-related questions, consider the following best practices:
- Be aware of the potential ethical concerns of AI-driven health care, such as data privacy.
- Verify the AI information with trusted medical sources.
- Consult a health care professional for conclusive diagnoses and treatment plans.
The Future of AI-assisted Self-diagnosis
According to data from consulting firm Accenture, health care AI applications could save the U.S. health care economy up to $150 billion annually by 2026. AI offers numerous potential benefits, but it’s important to recognize the limitations and concerns associated with medical self-diagnosis. Health care providers will likely strive to harness AI’s power rather than rely solely on it. By layering AI into health care systems and making those tools user-friendly, providers can gain access to insights that help them deliver better care.
AI is in the early stages of its development. However, as it advances, the future of medical self-diagnosis will likely involve even greater collaboration between AI developers and health care providers.
Summary
In today’s digital world, it’s easy to become overwhelmed when researching health-related information. Obtaining accurate health advice and information comes down to using all available sources while understanding their limitations. LLM chatbots could take provider-AI collaboration and diagnosis to the next level, but whether they will remains to be seen.
While generative AI is not meant to replace professional health care, it can be a good supplementary source and help you increase your health literacy and get answers quicker. Contact your doctor for the most accurate and personalized health care information and guidance.