Safeguarding data, privacy and reputation: best practices in the AI era

December 27, 2023


As our world becomes more data-driven, safeguarding data and privacy has become a growing focus for businesses and the regulatory agencies that set industry standards. Last year, Gartner forecasted that by the end of 2024, 75% of the world’s population will have its personal data covered under modern privacy regulations. But there are still gaps in regulatory frameworks and areas where businesses and consumers alike should be particularly careful with privacy as technology innovation pushes forward. 

Over the next year and even into 2025, the interaction between privacy and artificial intelligence (AI) technology will be a significant focus for leaders in their tech and data applications.

In this blog, I’ll highlight three best practices for global businesses to safeguard data, privacy and reputation as 2024 approaches. 

Set a privacy approach as the default

Across industries, we’re seeing that leaders are starting to approach privacy as a standard embedded practice, rather than a tick-box exercise. This mindset has escalated since popular generative AI tools like ChatGPT have become easily accessible to the public.

When these tools first became publicly available, many companies tried to ban them outright to protect privacy. In Italy, for example, ChatGPT was banned for a short period, though the ban has since been lifted.

With the proliferation of AI-powered tools, employees will encounter and use them ever more regularly. After all, banning a technology isn’t going to stop people from using it, nor will it sufficiently keep people safe.

As a result, companies first need to understand and accept that their employees and colleagues will find ways to access technology outside of the work environment. By default, then, they need to put human-centric privacy principles in place for AI technology, or for any technology that could endanger user privacy.

What’s helpful for leaders embracing privacy principles is that privacy laws tend to be technology neutral, meaning the same foundational privacy principles apply regardless of an ever-evolving technological landscape. For instance, generative AI tools like ChatGPT didn’t exist when GDPR took effect in the EU several years ago. But companies can apply the same GDPR principles by default to protect personal data when using generative AI, giving users far more secure, robust and familiar guidelines to follow.

The responsible use and protection of personal or sensitive data starts with giving people the principles and resources to stay knowledgeable and keep their data safe.

Personalize technological experiences

Externally, customers are very aware of AI as a growing part of how business is practiced. AI is now firmly embedded in the public domain, and the general public, on the whole, understands the importance of keeping personal data safe.

As a result, customers today are looking for personalized experiences built into technology tools that give them more access to, and control over, their own data privacy. They want resources at their fingertips to check, alter or object to things like consent, marketing, cookies and privacy settings.

Leaders building a privacy-centric approach to data should be intentional about how the customer and user experience is built within their technology tools. Customers immediately see the benefit of being able to securely access and control their information, track progress and upload supporting information from anywhere, and it isn’t a one-way street. Having customers manage their own requests can significantly decrease the demands on internal resources in areas such as customer service, privacy and legal. It can also limit the risks associated with sharing and transferring data through traditional mediums, such as post and email. Portals and tools like these are very likely where the future is headed, so companies that get ahead of the curve and build thoughtful interfaces around consumer data privacy will strengthen their trust and reputation with consumers.

Offer training and educational resources

As companies grow, business data increasingly needs to exist in a borderless digital world. The free flow of data is the next big challenge for many organizations. Yet modern privacy regulations may vary state by state or country by country, depending on where a company, or its customers’ data, is based.

To keep up with the increasing flow of data, businesses are rapidly expanding their in-house teams, adding local expertise to meet challenges as they arise. Businesses need to take a human-centric approach to growing their privacy capabilities as the business expands. Safe stewardship of data privacy also means that companies provide awareness resources and, in certain circumstances, training on the responsible use of data and AI.

Internally, that means offering education and support for colleagues at all levels on adhering to local privacy laws and internal guidelines, understanding global privacy restrictions and transfer limitations, maintaining reputational trust, and protecting information security.

Externally, that means communicating transparently with clients and customers about how their data will be used, where it is stored and what steps the business is taking to keep it secure. Companies that communicate about privacy and data use in clear, transparent terms will see more success in their privacy programs and build consumer trust along the way.

Implementing privacy by design and educating colleagues at the outset is key to not only maintaining program success but replicating and expanding those practices into multiple countries and jurisdictions.

A company’s reputation is built on upholding the promises of privacy, trust and security that its stakeholders – colleagues and customers – rely on when entrusting it with their data. A vulnerability in any of these core competencies will have negative consequences for brand reputation.

At Sedgwick, we’re focused on overcoming the challenges posed by a borderless digital world and building privacy-centric functions into our technology, so that no matter who’s using it or where they’re based, our stakeholders feel safe in the knowledge that their data is safeguarded.

Data privacy and protection: balancing your approach to cybersecurity risk

January 27, 2023


By Eric Schmitt – chief global information security officer and Brenda G. Corey – SVP compliance & regulatory

In a world increasingly concerned with privacy and protection, companies must balance their awareness of risk with compliance amid rapidly changing regulations.

From a data protection standpoint, over the past 24 months there has been an increased emphasis on ensuring data is retained only for the period it is needed, or as required by law. With transparency and data rights laws now active in two U.S. states (California’s CPRA and Virginia’s CDPA) and taking effect in three more during 2023 (Colorado, Connecticut and Utah), now is the time for companies to assess their infrastructure, isolate areas of potential exploitation by bad actors, and educate employees on best practices for protecting sensitive data.

Records retention

A big area of focus is full compliance with a records retention schedule. The retention schedule is vital for ensuring data is retained only for the period needed, for reducing risk by decreasing the amount of data stored, and for complying with emerging legislation. Companies around the world are on this journey today and are revalidating their existing policies to ensure compliance. It’s important that records retention obligations are met for multiple stakeholders – statutory, client and insurance carrier – in specific jurisdictions as well as at a global level.
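To make the idea concrete, the core of a retention check can be sketched in a few lines of Python. Everything here – the categories, the retention periods and the record fields – is an illustrative assumption, not an actual retention policy; real schedules are driven by statutory, client and carrier obligations.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical retention periods per record category, in days.
# Illustrative values only, not legal or contractual guidance.
RETENTION_SCHEDULE = {
    "claim_file": 7 * 365,
    "marketing_consent": 2 * 365,
}

@dataclass
class Record:
    record_id: str
    category: str
    closed_on: date  # date the record was closed and retention began

def is_past_retention(record: Record, today: date) -> bool:
    """Return True if the record has outlived its retention period."""
    period = RETENTION_SCHEDULE.get(record.category)
    if period is None:
        return False  # unknown category: keep the record and flag for review
    return today > record.closed_on + timedelta(days=period)

def records_to_purge(records, today):
    """List the IDs of records due for defensible deletion."""
    return [r.record_id for r in records if is_past_retention(r, today)]
```

In practice the unknown-category branch matters as much as the purge list: data that falls outside the schedule is exactly the data that revalidation exercises are meant to surface.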

Cyber resilience

On the tech side of the business, it’s important for cybersecurity, backup and disaster recovery teams to come together and provide a more unified program under the banner of “cyber resilience.” This level of partnership helps ensure that continuity plans – both business and technology – account for how protections will be implemented in the event of a cyberthreat, allowing an organization to respond quickly to emerging threats. Companies should make certain that their continuity programs address cyber-related issues.

Threat hunting

Armed with the mission of “breaking yourself before somebody else does,” cybersecurity teams attack an organization’s own cyber environments the way a bad actor might – a process called threat hunting. This provides visibility not only to spot the pain points where attacks may occur, but to build a quicker response so backup data can be protected and not all is lost in the event of a threat. Threat hunting should supplement a robust vulnerability and penetration testing program, not replace it. It brings two major benefits: your defenders learn to identify attacks as they work with the threat hunters, and the company can identify areas where additional controls may need to be applied.

Setting up a line of defense

You have to know what you have before you can protect it. By data-mapping all lines of business and the types of data flowing across them – including what information vendors share – you can get a clear picture of how and where data is secured. MITRE’s “crown jewels” exercises help highlight the most critical data assets and the vulnerabilities around them, so defenses can be layered accordingly.
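As a rough illustration of the data-mapping step, the snippet below builds a toy data map and pulls out the flows of sensitive data that leave the organization via third parties – the assets that most need layered defenses. The lines of business, data types and vendor names are all hypothetical, and a real exercise would follow MITRE’s Crown Jewels Analysis methodology rather than this toy structure.

```python
# Hypothetical data map: one entry per flow of data across a line of business.
DATA_MAP = [
    {"line_of_business": "claims", "data_type": "health records",
     "sensitive": True, "shared_with": ["MedReviewCo"]},
    {"line_of_business": "marketing", "data_type": "email addresses",
     "sensitive": False, "shared_with": []},
    {"line_of_business": "claims", "data_type": "bank details",
     "sensitive": True, "shared_with": ["PayVendor", "AuditCo"]},
]

def crown_jewels(data_map):
    """Return (line of business, data type, vendor) triples for every
    sensitive data flow that crosses the organization's boundary."""
    return [
        (flow["line_of_business"], flow["data_type"], vendor)
        for flow in data_map
        if flow["sensitive"]
        for vendor in flow["shared_with"]
    ]
```

Even a simple inventory like this answers the two questions the exercise poses: where the sensitive data is, and which external parties touch it.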

Colleague education is another tier of optimal data privacy and protection efforts. When it comes to cybersecurity risk, your people are your first and last line of defense. How employees can be better educated to identify inbound threats, such as phishing emails and other malicious activities – and how to positively reinforce that behavior – should always be top of mind. Phishing email training exercises should be run on a regular basis across the entire organization. Colleagues on teams that constantly handle sensitive data may need more frequent assessments for data breach prevention.

In the claims industry, privacy officers work to ensure data rights requests are addressed quickly and efficiently for individual claimants. In harmony with privacy laws, artificial intelligence may be leveraged to provide better services to individuals, such as in the case of automated claim reviews.

Privacy by design

Data privacy and security can be a differentiator for a company and its clients when they are “baked” into investment and operations strategy. As a company builds out new processes and programs, including the flow of information within the system, it’s essential that front-end teams know how to tackle privacy by design. Regulatory agencies are pushing harder to reduce data footprints; businesses must do their due diligence by asking deep questions about their data security programs and weighing their investment in threat intelligence.

Claim farming for damages in light of data protection laws

September 16, 2021


By Eur Ing Mark Hawksworth, Global technology specialist practice group leader and executive MCL adjuster

As new data protection laws are introduced, so are new challenges.

Claim farming – in which parties submit claims for financial gain against targets unaware of their exposure to data misuse allegations – is on the rise in the UK. After navigating hardship related to COVID-19 and an increasing number of cyberattacks, UK businesses now face the additional exposure of claims farmed under Regulation 6 of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR). In the claims we have received so far, businesses weren’t aware of their exposure, which makes building awareness and creating a plan so critical for policyholders looking to protect themselves.

Compliance concerns

In some cases, businesses may be unaware that they are liable for misuse of personal data. If individuals visit a company’s web page and notice that tracking cookies have been downloaded to their device, they may submit a claim that the web page does not comply with regulations. The challenge is determining whether the individual making the claim deliberately sought out that web page for their own financial gain.

Third-party claims relating to persistent tracking cookies placed on personal devices without the owner’s consent are being mass generated. The argument is that tracking cookies are intrusive and contrary to the GDPR, and that because no consent was given to the placement of the tracking cookie, the claimant can seek financial compensation.
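To illustrate the technical claim being made, here is a minimal Python sketch that flags persistent cookies – those carrying an Expires or Max-Age attribute, so they survive the browsing session – delivered before any consent is recorded. It is a simplified, assumption-laden check using the standard library’s cookie parser, not a compliance tool; real audits inspect live browser behavior and the cookies’ actual purposes.

```python
from http.cookies import SimpleCookie

def persistent_cookies_without_consent(set_cookie_headers, consent_given):
    """Return the names of persistent cookies (Expires/Max-Age set)
    issued on a response without prior consent. Session cookies and
    consented responses are not flagged in this simplified model."""
    if consent_given:
        return []
    offenders = []
    for header in set_cookie_headers:
        cookie = SimpleCookie()
        cookie.load(header)  # parse one Set-Cookie header value
        for name, morsel in cookie.items():
            # A cookie with an expiry or max-age outlives the session,
            # which is what makes it "persistent" tracking.
            if morsel["expires"] or morsel["max-age"]:
                offenders.append(name)
    return offenders
```

The strictly-necessary exemption in Regulation 6(3) is deliberately ignored here; deciding whether a given cookie qualifies for it is a legal judgment, not a header check.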

Serial claimants’ correspondence usually quotes Regulation 2(1) of PECR, under which consent by a user or subscriber corresponds to the data subject’s consent in the GDPR. Recital 32 of the GDPR states: “consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement”.

The process for policyholders

If a breach is identified along the lines above, the policyholder is notified of the installation of tracking cookies and is invited to submit a claim under any appropriate insurance cover they have in place. The claim is passed to insurers, then to a handler, who is provided with evidence of both the installation and the persistence of the tracking cookies, usually in the form of a video recorded from the claimant’s device.

The claim may reference:

  • allegations/evidence
  • the alleged web page installing tracking cookies
  • proof of persistence of the tracking cookies
  • how the tracking cookies create a unique identifier, which tracks internet behaviour in contravention of Regulation 6(1) of PECR
  • failure to provide the claimant with clear and comprehensive information about the purposes of these cookies, in contravention of Regulation 6(2)(a) of PECR
  • failure to obtain consent to use the cookies, in contravention of Regulation 6(2)(b) of PECR
  • failure to process personal data in a fair, lawful and transparent manner in contravention of Article 5 of the GDPR

During claim discussions, the threat is made that if the issue is not resolved to the claimant’s satisfaction, details will be passed to the Information Commissioner’s Office (ICO). The ICO can exercise its enforcement functions under Regulation 32 of PECR, and personal liability for breaches of PECR exists by virtue of Regulation 2(3) of The Privacy and Electronic Communications (Amendment) Regulations 2018. Correspondence usually concludes with a request for payment of financial compensation for the above lack of consent, upon receipt of which notification to the ICO will not be pursued.

For insurers, brokers and policyholders, it’s essential to understand the impact of not having cookie consent acceptance on web pages. Whether the business is aware or not, they may still be at fault, and as such, responsible for providing financial compensation to the claimant. Building awareness around claim farming and creating a plan can protect innocent parties from third-party claims such as this.