Irish Authorities Levy GDPR Fine in Centric Health Breach
The DPC says that while Centric stated in its initial breach notification that
70,000 data subjects were affected by the breach, it issued notifications only
to the 2,500 individuals whose data was irretrievably lost in the incident.
Besides the inadequate breach communication to affected individuals, the fine
levied against Centric also reflects a variety of other GDPR infringements,
including "failure to implement technical and organizational measures
appropriate to the level of risk" posed to personal and special category data on
Centric's server. "The failure to implement the necessary safeguards in an
effective manner at the appropriate time led to the possibility of patients'
personal data being erroneously disclosed to unauthorized people," the report
says. Centric, in a statement provided to Information Security Media Group, says
that at the time of the cyberattack, it immediately informed the DPC and
cooperated fully with the investigation. "We want to assure our patients that we
take our responsibility to protect their data and ensure the security of our IT
systems very seriously," Centric says.
Gitpod flaw shows cloud-based development environments need security assessments
"Many questions remain unanswered with the adoption of cloud-based development
environments: What happens if a cloud IDE workspace is infected with malware?
What happens when access controls are insufficient and allow cross-user or even
cross-organization access to workspaces? What happens when a rogue developer
exfiltrates company intellectual property from a cloud-hosted machine outside
the visibility of the organization's data loss prevention or endpoint security
software?" the Snyk researchers said in their report, which is part of a larger
project to investigate the security of CDEs. ... In fact, CDEs are in many ways
a big improvement over traditional IDEs: They can eliminate the configuration
drift that happens over time with developer workstations/laptops, they can
eliminate the dependency collisions that occur when developers work on different
projects, and they can limit the window for attacks because CDE workspaces run as
containers and can be short-lived.
Responsible AI: The research collaboration behind new open-source tools offered by Microsoft
Through its Responsible AI Toolbox, a collection of tools and functionalities
designed to help practitioners maximize the benefits of AI systems while
mitigating harms, and other efforts for responsible AI, Microsoft offers an
alternative: a principled approach to AI development centered around targeted
model improvement. Improving models through targeting methods aims to identify
solutions tailored to the causes of specific failures. This is a critical part
of a model improvement life cycle that not only includes the identification,
diagnosis, and mitigation of failures but also the tracking, comparison, and
validation of mitigation options. The approach supports practitioners in
better addressing failures without introducing new ones or eroding other
aspects of model performance. “With targeted model improvement, we’re trying
to encourage a more systematic process for improving machine learning in
research and practice,” says Besmira Nushi, a Microsoft Principal Researcher
involved with the development of tools for supporting responsible AI.
Now Microsoft has a new AI model - Kosmos-1
The researchers also tested how Kosmos-1 performed in the zero-shot Raven IQ
test. The results found a "large performance gap between the current model and
the average level of adults", but also found that its accuracy showed
potential for MLLMs to "perceive abstract conceptual patterns in a nonverbal
context" by aligning perception with language models. The research into "web
page question answering" is interesting given Microsoft's plan to use
Transformer-based language models to make Bing a better rival to Google
search. "Web page question answering aims at finding answers to questions from
web pages. It requires the model to comprehend both the semantics and the
structure of texts. The structure of the web page (such as tables, lists, and
HTML layout) plays a key role in how the information is arranged and
displayed. The task can help us evaluate our model's ability to understand the
semantics and the structure of web pages," the researchers explain.
How AI can improve quality assurance: seven tips
One of the areas where AI is proving its worth for quality assurance is in the
software development sector. AI seems particularly well-suited to regression
testing, which checks that previously tested software behavior keeps working
as expected following code modifications. Alternatively, AI
could help create new test cases. Some AI models can recognise or come up with
scenarios without prior exposure to them. If you’re thinking about using AI
to help with testing, identify which processes typically take humans the
longest or where errors happen most often. Then, assess whether AI
might avoid some of those issues and speed up the steps testers typically go
through when verifying all is well with new software. Also, keep in mind that
using AI for software testing works best when you have a large data set.
That’s why training your AI models thoroughly is so necessary, and not a step
to take hastily.
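The regression-testing pattern described above can be sketched as a simple baseline comparison: outputs captured from a previously tested version are replayed against the current code. This is a minimal illustration only; the `slugify` function and baseline cases are hypothetical, not taken from any specific AI testing tool.

```python
# Minimal regression-test sketch: verify a function still produces the
# same outputs as a stored baseline after code modifications.
# `slugify` and the baseline cases below are hypothetical examples.

def slugify(title: str) -> str:
    """Turn an article title into a URL slug."""
    return "-".join(title.lower().split())

# Baseline outputs captured from a previously tested version of the software.
BASELINE = {
    "Hello World": "hello-world",
    "Edge Computing Eats the Cloud": "edge-computing-eats-the-cloud",
}

def run_regression_suite() -> list[str]:
    """Return failure messages; an empty list means no regressions."""
    failures = []
    for input_title, expected in BASELINE.items():
        actual = slugify(input_title)
        if actual != expected:
            failures.append(f"{input_title!r}: expected {expected!r}, got {actual!r}")
    return failures

print(run_regression_suite())  # → [] when no regressions
```

An AI assistant would typically slot in at the baseline-generation step, proposing new input cases the human testers had not thought of; the comparison loop itself stays deterministic.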
Edge Computing Eats the Cloud?
Additionally, Sedoshkin says that smartphones are “more compact,” while a set of
GPUs and peripheral components makes more sense in an R&D lab environment.
He predicts this trend will continue to intensify. “Many real-world
applications require the usage of a smartphone anyway, and these devices are
capable of running pre-trained neural networks on edge. Smartphone
manufacturers will continue increasing computational power and memory capacity
on edge devices. However, R&D labs will use specialized hardware for
training and testing AI/ML algorithms, and DIY enthusiasts will use
specialized lightweight chipsets," Sedoshkin says. In short, there is little
to stop the encroachment of edge computing on the cloud’s lofty turf. There isn’t
much friction to slow it down, either. “The future of edge computing is an
evolving landscape; however, ‘ubiquitous’ is the best word that describes it
because it will evolve to be all around us,” Tiwari says. And by ubiquitous,
industry watchers say they literally mean everywhere.
4 tips to freshen up your IT resume in 2023
Every IT hiring manager looks for professionals who are passionate about their
work. And what better way to show this than by discussing your passion
projects? In your resume’s contact information section, add a link to any
outside projects you’ve worked on over the years, casual or professional.
Remember that these don’t need to be overly complex or high-tech – the point
is to show you’re passionate about technology outside of work. Even if your
contributions involve small edits or suggestions to other people’s code,
include them on your resume. That said, your profiles must be up to date. If
you haven’t updated a profile in years, don’t include it. Keeping your IT resume
updated and relevant in 2023 is crucial for job seekers in the competitive
technology industry. And while many IT professionals get job offers without an
optimized resume, an exceptional resume might just be what stands between you
and your top-choice company.
The role of human insight in AI-based cybersecurity
Traditional cybersecurity solutions, like secure email gateways (SEGs), rely
on pre-defined rules and patterns to identify potential threats. However,
these rules and patterns can become outdated quickly, leading to a high rate
of false positives and false negatives. Sophisticated phishing attacks can
also evade SEG systems by impersonating known trusted senders or taking over
accounts. By using reinforcement learning from human feedback (RLHF), a model can learn from analyst feedback and
continuously adapt to new threats as they emerge. Enterprise security teams
spend as much as 33% of their time dealing with phishing scams. Since
traditional cybersecurity solutions often rely on manual processes, this leads
to delays in detecting and responding to potential threats. By combining AI
and RLHF, teams can better identify potential threats, resulting in up to a
90% reduction in the amount of time needed to identify and react to phishing
scams, while also significantly improving the organization’s risk posture.
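The "learn from human feedback" loop described above can be illustrated with a toy example. Real RLHF trains a reward model and optimizes a policy against it; the sketch below only shows the core idea of a filter whose weights are nudged whenever a human analyst corrects its verdict. All keywords, weights, and messages are hypothetical.

```python
# Drastically simplified sketch of a phishing filter adapting to human
# feedback. NOT real RLHF (no reward model, no policy optimization) and
# not any vendor's actual system; purely illustrative, all data invented.

from collections import defaultdict

weights = defaultdict(float, {"urgent": 1.0, "password": 1.0, "invoice": 0.5})
THRESHOLD = 1.0
LEARNING_RATE = 0.5

def score(message: str) -> float:
    return sum(weights[w] for w in message.lower().split())

def predict(message: str) -> bool:
    return score(message) >= THRESHOLD

def apply_human_feedback(message: str, is_phishing: bool) -> None:
    """Nudge keyword weights toward the analyst's verdict when the model errs."""
    if predict(message) != is_phishing:
        direction = 1.0 if is_phishing else -1.0
        for w in message.lower().split():
            weights[w] += LEARNING_RATE * direction

# An analyst flags a novel lure that the static rules miss:
msg = "your parcel delivery failed"
before = predict(msg)            # False: no known phishing keywords yet
apply_human_feedback(msg, True)  # first correction raises the keyword weights
apply_human_feedback(msg, True)  # already correct now, so no further change
after = predict(msg)             # True: the filter has adapted to the new threat
print(before, after)
```

The point of the sketch is the contrast with a static SEG rule set: the human label changes future predictions immediately, without waiting for a vendor rule update.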
Biden's Cybersecurity Strategy Calls for Software Liability, Tighter Critical Infrastructure Security
The requirements will be performance-based, adaptable to changing
requirements, and focused on driving adoption of secure-by-design principles.
"While voluntary approaches to critical infrastructure security have produced
meaningful improvements, the lack of mandatory requirements has resulted in
inadequate and inconsistent outcomes," the strategy document said. Regulation
can also level the playing field in sectors where operators compete to
underspend on security because there is no incentive to implement better
protections. The strategy offers critical infrastructure operators that might
lack the financial and technical resources to meet the new requirements
potential new avenues for securing those resources. Joshua Corman, former CISA chief strategist and
current vice president of cyber safety at Claroty, says the Biden
administration's choice to make critical infrastructure security a priority is
an important one.
Interactive Microservices as an Alternative to Micro Front-Ends for Modularizing the UI Layer
Interactive microservices are based on a new type of web API that Qworum
defines, the multi-phase web API. What differentiates these APIs from
conventional REST or JSON-RPC web APIs is that endpoint calls may involve more
than one request-response pair, also called a phase. ... Unbounded
composability — Interactive microservices can call other endpoints and even
themselves during their execution. The maximum depth of allowed nested calls
is unbounded, and each call has a full-page UI at its disposal regardless of
nesting depth. This is unlike micro front-ends, which typically cannot be
nested beyond one or two levels because the UI surface area allocated to each
micro front-end becomes vanishingly small with increasing nesting depth.
General applicability — Qworum services are more generally applicable for
distributed applications than micro front-ends, as the latter are generally
tied to a particular web application (ad hoc micro front-ends), front-end
framework (React micro front-ends, Angular micro front-ends etc) or
organisation.
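The multi-phase call pattern can be loosely illustrated with a generator: one endpoint invocation spans several request-response pairs (phases) before yielding a final result. This is an illustrative sketch of the concept only, not Qworum's actual API; the endpoint, its phases, and the driver function are all hypothetical.

```python
# Toy illustration of a multi-phase endpoint call. A REST call is a single
# request-response pair; here one invocation spans several phases, each
# pairing a response (a UI prompt) with the next request (client input).
# Generator-based sketch; NOT Qworum's actual API, all names invented.

def checkout_endpoint():
    """A 'pay and ship' endpoint that needs two phases of user input."""
    # Phase 1: respond with a shipping form, wait for the next request.
    address = yield {"phase": 1, "ui": "shipping-address-form"}
    # Phase 2: respond with a payment form, wait again.
    card = yield {"phase": 2, "ui": "payment-form"}
    # Final response ends the call and returns a result to the caller.
    return {"status": "ok", "ship_to": address, "paid_with": card}

def invoke(endpoint, requests):
    """Drive a multi-phase call, feeding each follow-up request in turn."""
    call = endpoint()
    response = next(call)          # initial request opens phase 1
    for req in requests:
        try:
            response = call.send(req)
        except StopIteration as done:
            return done.value      # endpoint finished: final result
    return response

result = invoke(checkout_endpoint, ["221B Baker St", "visa-4242"])
print(result)  # → {'status': 'ok', 'ship_to': '221B Baker St', 'paid_with': 'visa-4242'}
```

Nesting falls out naturally in this model: a phase handler could itself `invoke` another multi-phase endpoint, and each nested call would still get the full "page" to itself, which is the composability property the article attributes to interactive microservices.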
Quote for the day:
"When building a team, I always search
first for people who love to win. If I can't find any of those, I look for
people who hate to lose." -- H. Ross Perot