What Every CEO Needs To Know About The New AI Act
The act says “AI should be a human-centric technology. It should serve as a tool
for people, with the ultimate aim of increasing human well-being.” So it’s good
to see that limiting the ways it could cause harm has been put at the heart of
the new laws. However, there is a fair amount of ambiguity in some of the
wording, which could leave it open to interpretation. Could the use of AI to
target marketing for products like fast
food and high-sugar soft drinks be considered to influence behaviors in harmful
ways? And how do we judge whether a social scoring system will lead to
discrimination in a world where we’re used to being credit-checked and scored by
a multitude of government and private bodies? ... The act makes it clear that AI
should be as transparent as possible. Again, there’s some ambiguity here—at
least in the eyes of someone like me who isn’t a lawyer. Stipulations are made,
for example, around cases where there is a need to “protect trade secrets and
confidential business information.” But it’s uncertain right now how this would
be interpreted when cases start coming before courts.
What’s behind Italy’s emergence as a key player in Europe’s digital landscape?
Regional cloud providers can respond promptly to needs that hyperscalers do not
meet, equipped with more flexible offerings, highly customized services, and
attention to local specificities. These demands are increasingly common and
insistent: businesses want greater flexibility and customization of cloud
services to fit their specific needs, along with a presence in strategically
important regions so that services better meet local or sectoral requirements.
As a result, regions like Italy are increasingly becoming preferred cloud
regions, and the data center sector is following a parallel path that positions
Italy as Europe's newest data hub. Credit is also due
to local providers breaking away from the 'one size fits all' dynamic, offering
tailor-made and ad hoc services for the needs of companies migrating to the
cloud. ... Combined with its geographic advantages, the current socio-economic
climate, and a focus on regulatory compliance, Italy is well-positioned to
solidify its place as a significant player in the future of the European cloud
and data center scene.
Customer science: A new CIO imperative
Science is defined by many as the rigorous and systematic identification and
measurement of phenomena. In both the for-profit and nonprofit sectors the most
important phenomenon is customer behavior and mindset. Customer science puts
customer behavior and mindset under a microscope. Is your organization good at
customer science? Does your organization measure customer experience? Does your
organization employ “scientists” to observe and explain customer behavior based
on the data you have collected? ... The path to customer science is fraught with
paradoxes. The organizational paradox is this: if the “Customer is King,” why is
there no one in the enterprise with the authority to ensure that every
interaction meets or exceeds expectations? Is this the role of the now very much
in vogue chief customer officer? The chief experience officer? Glenn Laverty,
the now-retired former president and CEO of Ricoh Canada, finessed this
responsibility/authority paradox by tying every employee's compensation to
customer experience/satisfaction metrics. What gets measured and what gets rewarded drive
behavior.
Enhancing Secure Software Development With ASOC Platforms
There are many ways to adopt DevSecOps. For those looking to avoid complicated
setups, the market offers ASOC-based solutions. These solutions can help
companies save time, money, and labor resources while also reducing the time to
market for their products. ASOC platforms enhance the effectiveness of security
testing and maintain the security of software in development without delaying
delivery. Gartner's Hype Cycle for Application Security, 2021, indicated that
the market penetration of these solutions ranged from 5 to 20% among the
intended clients. The practical uptake of this technology is low primarily
because of limited awareness about its availability and benefits. ASOC solutions
incorporate Application Security Testing (AST) tools into existing CI/CD
pipelines, facilitating transparent and real-time collaboration between
engineering teams and information security experts. These platforms offer
orchestration capabilities, meaning they set up and execute security pipelines,
as well as carry out correlation analysis of issues identified by AST tools,
further aggregating this data for comprehensive insight.
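As a rough illustration of the correlation and aggregation step described above, here is a minimal sketch in Python. The tool names, finding fields, and severity scale are hypothetical (not any specific ASOC product's schema); the idea is simply that findings from different AST tools pointing at the same location and weakness are merged into one issue so engineering and security see a single, prioritized list.

```python
from collections import defaultdict

# Hypothetical, simplified findings as different AST tools (SAST, SCA, DAST)
# might report them after a pipeline run. Field names are illustrative only.
findings = [
    {"tool": "sast-scanner", "rule": "CWE-89", "file": "api/orders.py", "line": 42, "severity": "high"},
    {"tool": "sca-scanner", "rule": "CVE-2023-1234", "file": "requirements.txt", "line": 7, "severity": "critical"},
    {"tool": "dast-scanner", "rule": "CWE-89", "file": "api/orders.py", "line": 42, "severity": "medium"},
]

SEVERITY_RANK = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def correlate(findings):
    """Group findings that point at the same location and weakness,
    keeping the highest severity and the list of tools that agree."""
    grouped = defaultdict(list)
    for f in findings:
        grouped[(f["file"], f["line"], f["rule"])].append(f)

    issues = []
    for (path, line, rule), group in grouped.items():
        worst = max(group, key=lambda f: SEVERITY_RANK[f["severity"]])
        issues.append({
            "file": path,
            "line": line,
            "rule": rule,
            "severity": worst["severity"],
            "confirmed_by": sorted({f["tool"] for f in group}),
        })
    # Surface the most severe, multi-tool-confirmed issues first.
    return sorted(
        issues,
        key=lambda i: (SEVERITY_RANK[i["severity"]], len(i["confirmed_by"])),
        reverse=True,
    )

for issue in correlate(findings):
    print(issue)
```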
The cybersecurity skills shortage: A CISO perspective
Experienced cybersecurity professionals are poached daily, enticed with higher
compensation and better working situations. Successful CISOs keep an eye on
employee satisfaction and make sure to help staff manage stress levels. Active
CISOs also open avenues for staff to grow their skill sets and career
opportunities. ... There’s no reason why cybersecurity staff should be underpaid
or underappreciated. Proactive CISOs educate the brass on competitive salary
comparisons and risks/costs associated with understaffed teams and employee
attrition. When it comes to cybersecurity staffing, executives must understand
the foolishness of tripping over dollars to pick up pennies. ... How do you
bolster staff efficiency without adding more bodies? Automate any process that
can be automated. Automating security operations processes is a good start, but
advanced organizations move beyond security alone and think about process
automation across lifecycles that span security, IT operations, and software
development. Examples could include finding/patching software vulnerabilities,
segmenting networks, or DevSecOps programs.
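As a minimal sketch of the "automate what can be automated" point, the snippet below assumes a hypothetical JSON export from a vulnerability scanner and a hypothetical create_ticket helper (in practice this would call your ticketing system's API). It simply routes unpatched, high-severity findings to the team that owns each asset so analysts are not triaging them by hand.

```python
import json

def create_ticket(team, summary):
    # Hypothetical helper: in practice this would call a ticketing
    # system's API (Jira, ServiceNow, etc.).
    print(f"[{team}] {summary}")

# Hypothetical scanner export: asset, owning team, CVE, severity, patch state.
scan_results = json.loads("""[
  {"asset": "web-01", "team": "platform", "cve": "CVE-2024-0001", "severity": "high", "patched": false},
  {"asset": "db-02",  "team": "data",     "cve": "CVE-2024-0002", "severity": "low",  "patched": false},
  {"asset": "web-01", "team": "platform", "cve": "CVE-2024-0003", "severity": "critical", "patched": true}
]""")

# Only unpatched, high-impact findings become tickets; the rest stay in reporting.
for finding in scan_results:
    if not finding["patched"] and finding["severity"] in ("high", "critical"):
        create_ticket(
            finding["team"],
            f'{finding["cve"]} unpatched on {finding["asset"]} (severity: {finding["severity"]})',
        )
```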
Misaligned upskilling priorities could hinder AI progress
“The rapid rise of AI requires business leaders to build and shape the future
workforce now to thrive or risk lagging behind in a future transformed by a
seismic shift in the skills needed for the era of intelligence,” said Libby
Duane-Adams, Chief Advocacy Officer at Alteryx. “Not all employees need to
become data scientists. It’s about championing cultures of creative
problem-solving, learning to look at business problems through an analytic lens,
and collaborating across all levels to empower employees to use data in everyday
roles. Continuous investments in data literacy upskilling and training
opportunities will create the professional trajectories where everyone can
“speak data” and exploit AI applications for trusted, ethical outcomes.” “As
India invests US$1.2 billion in a wide range of AI projects, the country is
set to become a significant force for shaping the future of AI,” said Souma Das,
Managing Director, India Sub-continent at Alteryx. “As organisations gear up for
the future, our research highlights how imperative it is to nurture a diverse
workforce with a range of data and analytics abilities to ensure employees are
empowered to navigate the dynamic landscape together.”
Want to be a DevOps engineer? Here's the good, the bad, and the ugly
"The DevOps ecosystem is huge and constantly evolving," he added. "Tools and
frameworks so popular yesterday may be replaced by new alternatives. On top of
your regular job as an engineer, you probably need to give up some of your free
time for studying." Even when you gain more experience, "the learning doesn't
stop," Henry said. "In fact, it's commonly noted as one of the things that
DevOps engineers love most about their job. With the pace of development and
introduction of AI tools like ChatGPT, DevOps engineering today won't be the
same as DevOps engineering two or three years from now." One aspect that may
separate passionate DevOps engineers from other colleagues is the infrastructure
management part of the job. "If you're not a fan of managing infrastructure,
you're going to struggle," Henry cautioned. "This is a big one. As a DevOps
engineer, I spend a huge amount of time setting up, configuring, and maintaining
the cloud infrastructure that supports various applications. This means dealing
with servers, databases, networks, and security on a daily basis. Now, if this
excites you, great. This world could be perfect."
Decoding AI success: The complete data labeling guide
Data labeling is essential to machine learning data pre-processing. Labeling
organizes data by assigning meaning; the labeled data then trains a machine
learning model to find “meaning” in new, relevantly similar data. In this process, machine learning
practitioners seek quality and quantity. Because machine learning models make
decisions based on all labeled data, accurately labeled data in larger
quantities creates more useful deep learning models. In image labeling or
annotation, a human labeler applies bounding boxes to relevant objects to label
an image asset, for example boxing taxis and trucks in yellow and pedestrians in blue. A
model that can accurately predict new data (in this case, street view images of
objects) will be more successful if it can distinguish cars from pedestrians.
... Locating and training human labelers (annotators) starts data labeling
projects. Annotators must be trained on each annotation project’s specifications
and guidelines because use cases, teams, and organizations have different needs.
After training, image and video annotators will label hundreds or thousands of
images and videos using home-grown or open-source labeling tools.
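To make the bounding-box idea concrete, here is a small sketch using a hypothetical, COCO-style record layout (field names and coordinates are illustrative, not a particular labeling tool's schema), along with the kind of basic validation an annotator or reviewer might run before accepting a labeled image.

```python
# A hypothetical labeled image: one record per bounding box, with the class
# name and pixel coordinates (x, y of the top-left corner, width, height).
IMAGE_WIDTH, IMAGE_HEIGHT = 1920, 1080

annotations = [
    {"label": "taxi",       "bbox": [412, 530, 180, 95]},
    {"label": "truck",      "bbox": [900, 480, 260, 140]},
    {"label": "pedestrian", "bbox": [1500, 600, 45, 120]},
]

VALID_LABELS = {"taxi", "truck", "pedestrian"}

def validate(ann):
    """Basic quality checks: known class, box inside the image, non-zero area."""
    x, y, w, h = ann["bbox"]
    return (
        ann["label"] in VALID_LABELS
        and w > 0 and h > 0
        and x >= 0 and y >= 0
        and x + w <= IMAGE_WIDTH and y + h <= IMAGE_HEIGHT
    )

bad = [a for a in annotations if not validate(a)]
print(f"{len(annotations) - len(bad)} of {len(annotations)} boxes pass review")
```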
4 steps to improve root cause analysis
It’s easier for devops teams to point to problems in the network and
infrastructure as the root cause of a performance issue, especially when these
are the responsibility of a vendor or another department. That knee-jerk
response was a significant problem before organizations adopted devops culture
and recognized that agility and operational resiliency are everyone’s
responsibility. “The villain when there are application performance issues is
almost always the network, and it’s always the first thing we blame, but also
the hardest thing to prove,” says Nicolas Vibert of Isovalent. “Cloud-native and
the multiple layers of network virtualization and abstraction caused by
containerization make it even harder to correlate the network as the root cause
issue.” Identifying and resolving complex network issues can be more challenging
when building microservices, applications that connect to third-party systems,
IoT data streams, and other real-time distributed systems. This complexity means
that IT ops teams need to monitor networks, correlate network behavior with
application performance issues, and perform network RCAs more efficiently.
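As a minimal sketch of that correlation step, assume per-minute network latency and application error counts are already exported as simple time series (the sample data and thresholds below are hypothetical). Flagging the windows where both spike together is one starting point for deciding whether the network really is the root cause.

```python
# Hypothetical per-minute samples from network monitoring and APM tooling.
network_latency_ms = [20, 22, 21, 180, 175, 23, 22, 210, 24]
app_error_count    = [ 1,  0,  2,  40,  35,  1,  0,   3,  1]

LATENCY_THRESHOLD_MS = 100
ERROR_THRESHOLD = 10

# Minutes where a latency spike and an error spike coincide.
suspect_windows = [
    minute
    for minute, (lat, err) in enumerate(zip(network_latency_ms, app_error_count))
    if lat > LATENCY_THRESHOLD_MS and err > ERROR_THRESHOLD
]

print("Minutes where network latency and app errors spike together:", suspect_windows)
# A minute where only latency spikes (here, minute 7) suggests the network
# alone may not explain the application issue.
```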
From Chaos to Clarity: Streamlining DevSecOps in the Digital Era
No development team deliberately sets out to build and deploy an insecure
application. The reason applications with known vulnerabilities are deployed so
often is because the cognitive load associated with discovering and remediating
them is simply too high. The average developer can only allocate 10% to 20% of
their time to remediating vulnerabilities. The rest of their time is spent either
writing new code or maintaining the application development environment used to
write that code. If organizations want more secure applications, they need to
find ways to make it easy for developers to correlate, prioritize and
contextualize the vulnerabilities as they are being identified. Most of the time
when developers are informed a vulnerability has been discovered in their code,
they have long since lost context. Vulnerabilities need to be immediately
identified at the time code is written, builds are created, and pull requests
are made – and identified in a way that is actionable. Otherwise, that
vulnerability is likely to be thrown atop the massive pile of technical debt
that developers hope they’ll one day have the time to address.
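As a rough sketch of what "correlate, prioritize and contextualize as vulnerabilities are identified" could look like at pull-request time (the field names and data are hypothetical, not a particular scanner's output), the snippet below keeps only the findings that touch files changed in the pull request and orders them by severity, so the developer sees them while the context is still fresh and everything else goes to the backlog instead of the PR.

```python
# Hypothetical scanner findings and the set of files changed in the pull request.
findings = [
    {"id": "VULN-1", "file": "app/payments.py", "severity": "critical"},
    {"id": "VULN-2", "file": "app/legacy/report.py", "severity": "high"},
    {"id": "VULN-3", "file": "app/payments.py", "severity": "low"},
]
changed_files = {"app/payments.py", "app/checkout.py"}

SEVERITY_RANK = {"critical": 3, "high": 2, "medium": 1, "low": 0}

# Keep only findings the developer can act on in this pull request,
# most severe first.
actionable = sorted(
    (f for f in findings if f["file"] in changed_files),
    key=lambda f: SEVERITY_RANK[f["severity"]],
    reverse=True,
)

for f in actionable:
    print(f'{f["id"]}: {f["severity"]} in {f["file"]} (touched in this PR)')
```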
Quote for the day:
“Let no feeling of discouragement prey
upon you, and in the end you are sure to succeed.” --
Abraham Lincoln