Devil in the data?
In partnership with Holding Redlich
The AI, cybersecurity and privacy arenas all contain new legal risks for organisations, according to Holding Redlich, and the frantic pace of change means there is no time to ‘set and forget’
RAPID CHANGE in the digital economy is creating a fast-evolving risk landscape for legal clients. Whether it’s the explosion in AI systems, the exploits of cybercriminals, or the complexity of managing sprawling data estates, mitigating these risks can be challenging.
And as Holding Redlich general counsel Lyn Nicholson observes, the future is hard to predict. “Augmented reality and blended AI-human relationships are two things that we see coming in ways that I’m not sure we can predict at the moment,” she says.
So, what should clients do to minimise their current privacy, cybersecurity and AI risks? Nicholson and Holding Redlich partner Sarah Butler reveal it’s about better understanding these organisational risks and putting measures in place now to manage them.
Holding Redlich is a leading national Australian commercial law firm with significant expertise in the data and privacy space. With deep knowledge of data protection, privacy law and information governance, our team supports clients to navigate complex regulatory frameworks and maximise their digital opportunities while managing risk. This includes advising in related areas such as technology contracts, SaaS contracts and licensing, and IP commercialisation.
Across our team in Melbourne, Canberra, Sydney, Brisbane and Cairns, we provide a range of legal services for clients of all sizes, including many of Australia’s largest public and private companies and all levels of government.
“Organisations are not looking at where their biggest potential cybersecurity harms are and are leaving low-hanging fruit”
Lyn Nicholson, Holding Redlich
Privacy: Face up to Australia’s newest privacy risks
Australia recently took a step towards modernising the now-outdated Privacy Act 1988. In late 2024, Parliament legislated 23 of the 25 ‘agreed’ proposals from the government’s response to the Privacy Act Review, in what is expected to be the first tranche of privacy law reform.
Lyn says one fresh risk arising from these recent reforms is the new powers granted to the Office of the Australian Information Commissioner (OAIC). The OAIC can now bypass the courts to issue infringement notices for privacy breaches. The regulator can also conduct public inquiries to call out corporate behaviour that, though not unlawful, it believes warrants public attention.
“One of the risks we see here is an underfunded regulator looking to make a maximum impact using the resources they have,” Lyn says.
According to Holding Redlich, it’s unclear where the OAIC may apply these powers following a 2024 determination against retailer Bunnings for its blanket use of facial recognition technology.
“That determination has put industry on notice when it comes to utilising biometrics and facial recognition technologies,” Lyn says.
Organisations responsible for systemic failures when AI systems are in place could also be at risk. Lyn cites a recent incident in Melbourne when armed individuals were able to enter the MCG despite being flagged by an AI system as requiring secondary screening. “Those systemic failures are a big risk,” she says.
She adds that organisations should assess their adoption of technologies carefully, and not just facial recognition and biometrics. “These could be technologies that look like inexpensive solutions to a problem but are privacy invasive and need governance protections around them.”
Cybersecurity: Fix the low-hanging fruit first
Cybersecurity and the protection of customer data are top of mind following a number of high-profile local cyber breaches in recent years. These include the Optus and Medibank hacks, as well as more recent attacks, such as those this year involving local superannuation funds.
Lyn says organisations should consider directing more cyber spending towards areas facing the most likely harms, rather than simply funnelling the majority of their cyber budgets into defending their perimeter against cybercriminals.
“Every industry is at risk, but some will have low-hanging fruit,” she says. One example is the recent super fund cyber hack, where the absence of multi-factor authentication (MFA) may have played a role. MFA has long been considered an essential practice, both in Australia and internationally.
“Organisations should be looking at what the current AI frameworks and standards are, and using those to develop an AI governance framework now”
Sarah Butler, Holding Redlich
“Organisations are not looking at where their biggest potential cybersecurity harms are and are leaving that low-hanging fruit. We are on a journey of continuous improvement; it’s not set and forget. Whatever protections you had in place six months ago are probably not good enough today,” Lyn says.
Part of the security challenge, according to Sarah, is that many businesses are dealing with complex legacy systems. “A lot of businesses have outdated infrastructure that can’t easily be patched and protected using modern security tools,” she explains.
Sarah says this outdated infrastructure is creating significant exposures. Though replacing these systems can be expensive, she believes it is “crucially important” in the current environment to keep practices like authentication up to date and in line with best practice.
Sarah recommends organisations audit their data and systems and challenge providers in their supply chain about their own security postures. “A lot of recent cyber incidents have involved third-party suppliers – you need to maintain constant vigilance.”
AI: Establish AI governance ahead of legislation
While Australia has yet to introduce legislation specifically governing AI since ChatGPT shook up the space in late 2022, Sarah says the 2024 consultation on mandatory guardrails for high-risk AI, along with the introduction of voluntary standards, gives some indication of where regulation is heading.
Likewise, the EU AI Act, which has extraterritorial reach and can capture Australian companies operating in that market, could influence the shape of regulatory efforts in Australia. That would follow a similar path to the GDPR, which has become a de facto global standard for data protection.
Published 12 May 2025
Copyright © 2025 KM Business Information Australia Pty Ltd
APAC a target for cybercriminals
The Asia-Pacific region, which includes Australia, is currently experiencing cyber threats at a rate 60% higher than the global average
Organisations in the region faced 2,915 attacks per week over the six months to February 2025, compared to the global average of 1,843 attacks
52% of malicious files were delivered via web-based attacks in the past 30 days, showing a rise in phishing, while ransomware accounted for 6.3% of incidents
The most targeted sectors in the APAC region at present are education and research, healthcare, government and the military
Source: Check Point Threat Intelligence, February 2025
Sarah says Australia could be looking at an AI Act like the EU’s, or could end up augmenting existing regulatory regimes such as those covering privacy and consumer protection.
She recommends that, if organisations don’t yet have an AI governance framework in place and are planning to deploy or develop AI, they should consider implementing one as soon as possible rather than waiting for new guardrails or actual legislation.
“Organisations should be looking at what the current AI frameworks and standards are and using those to develop an AI governance framework now, because that will put them in a good place to evolve their governance frameworks as we see what regulation is going to look like.”
Businesses are feeling significantly overregulated in the data and technology space, according to Lyn and Sarah, and some financial services organisations now need to provide over a dozen notifications under different laws if they are hit by a cyber breach.
With AI now in the sights of regulators, Lyn says there is a noticeable shift in the regulatory approach here and abroad towards encouraging industry guidelines and standards, which can be rolled out and adopted more easily by organisations.
Balancing regulation with innovation
Three Australian options for regulating AI
1. Domain specific: existing regulatory frameworks would be adapted to include the Australian government’s proposed mandatory guardrails on AI
2. Framework: the Australian government would introduce framework legislation, with associated amendments to existing legislation
3. Whole of economy: a new cross-economy AI Act would be introduced to regulate AI, following the pathway pursued by the EU
Source: Australian Government, Introducing mandatory guardrails for AI in high-risk settings: proposal paper
She gives the example of the National Institute of Standards and Technology, part of the US Department of Commerce, whose cybersecurity standards are widely adopted by Australian organisations alongside local models such as the Essential Eight.
“That seems to be the direction we are going in,” Lyn explains.
“Certainty in the form of legislation is a beautiful thing, but legislation takes a lot of time to draft. Getting legislation through is also difficult; one example of that is the Privacy Act reform process where, arguably, Australia did not get much bang for our buck at the end of the day.
“When voluntary standards are introduced and adopted by industry, that seems to work to a degree. Another cybersecurity best practice setting is ISO 27001, and many people will accept that as good industry practice. Do we have to have a law on that agreed set of standards?”
Sarah says a key question that arises when regulating a technology like AI is how to achieve a balance between protecting Australians from its potential harms and ensuring that local organisations remain competitive in AI development and adoption.
“We don’t want to stifle creativity and development, and that’s a hard balance to get right.”
All aboard for good data governance
Holding Redlich is witnessing a shift from IT being the sole domain of tech specialists towards an understanding that broader and deeper tech literacy is now required. Boards, for example, are starting to take greater responsibility for AI, data and technology governance.
While Lyn says this shift remains embryonic, AI has gained prominence in forums such as the Australian Institute of Company Directors. “AI has suddenly become a big component of the agenda, and it is clear boards are taking it on as an enterprise governance issue,” she says.
