7 mistakes to avoid when developing RPAs
“The biggest mistake when using RPA is to fall into the trap of thinking it
can automate processes when, in reality, RPA is more accurately robotic task
automation (RTA),” says Aali Qureshi, SVP of Sales for the Americas at
Kissflow. “RPA bots are great for automating individual, repetitive vertical
tasks, but if you want to create and automate more complex horizontal
processes that span an entire enterprise, you need a low-code or no-code
automation tool that allows you to automate tasks and processes in order to
skip hand-coding.” ... It’s not only exceptions that can be problematic,
especially when deploying bots to support critical business processes. The
next mistake to avoid is deploying bots to production without data validation,
error detection, monitoring, and alerting. “RPA is relatively easy as long as
one can assume it works correctly, or if it doesn’t, no damage will be done.
But malfunctioning RPA can make a huge number of errors in a very short time,”
says Hannula. One best practice is centralizing bot monitoring and alerting
with the devops or IT ops teams responsible for monitoring applications and
infrastructure.
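To make the validation-and-alerting point concrete, here is a minimal Python sketch of how a bot task might be wrapped with data validation, error detection, and an alert when the failure rate crosses a threshold. The record fields, threshold, and alert routing are illustrative assumptions, not part of any particular RPA product.

```python
import logging

logger = logging.getLogger("rpa-bot")

def validate_record(record: dict) -> bool:
    """Hypothetical validation rule: require an invoice ID and a plausible amount."""
    return bool(record.get("invoice_id")) and 0 < record.get("amount", -1) < 1_000_000

def process(record: dict) -> None:
    """Placeholder for the actual automated task the bot performs."""
    logger.info("processed %s", record["invoice_id"])

def send_alert(message: str) -> None:
    """Stand-in for routing an alert to the devops/IT ops monitoring channel."""
    logger.error("ALERT: %s", message)

def run_bot(records: list[dict], max_error_rate: float = 0.05) -> None:
    errors = 0
    for record in records:
        if not validate_record(record):
            errors += 1           # count bad input instead of acting on it
            continue
        try:
            process(record)
        except Exception:
            errors += 1           # detect task failures rather than ignoring them
    if records and errors / len(records) > max_error_rate:
        # A malfunctioning bot can make many errors quickly, so stop and alert.
        send_alert(f"{errors}/{len(records)} records failed; bot paused for review")
```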
How to ask the board and C-suite for security funding
Risk acceptance is the board's prerogative. So, Budiharto advises CISOs to
calculate and communicate the cost of not implementing the solution, including
the likelihood of a breach or exposure, and the full financial impact of such
a breach or exposure (from direct losses to cleanup costs) should the funding
request be denied. "To the CFO, those savings should far outweigh the TCO of
implementing and managing the solution," she adds. Putting it all together,
she describes a scenario where a new solution needs to be added to the
existing EDR to stop ransomware in its tracks, kill it, and remediate it
faster and more thoroughly than the existing EDR can on its own. "The board will ask,
'How is that related to the bottom line?' So, I calculate the loss of revenue
in productivity and loss of business and multiply that by the average days of
trying to resolve a ransomware attack under the current EDR system," Budiharto
explains. "These types of comparisons will help the board see the big picture,
including how your solution will help avoid that big expense."
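As a rough illustration of the arithmetic Budiharto describes, the sketch below compares the expected cost of resolving ransomware under the current EDR with the TCO of a proposed add-on. Every figure is a placeholder assumption to show the shape of the comparison, not real data.

```python
# Back-of-the-envelope sketch of the comparison described above.
# Every figure below is a placeholder assumption for illustration only.

daily_productivity_loss = 150_000      # revenue + productivity lost per day of disruption
avg_days_to_resolve_current = 12       # assumed days to resolve under the current EDR
avg_days_to_resolve_proposed = 3       # assumed days with the proposed add-on solution
breach_likelihood_per_year = 0.25      # assumed annual probability of a ransomware incident
solution_tco_per_year = 200_000        # assumed licensing + management cost of the new solution

expected_loss_current = (breach_likelihood_per_year * daily_productivity_loss
                         * avg_days_to_resolve_current)
expected_loss_proposed = (breach_likelihood_per_year * daily_productivity_loss
                          * avg_days_to_resolve_proposed)

expected_savings = expected_loss_current - expected_loss_proposed
print(f"Expected annual savings: ${expected_savings:,.0f} "
      f"vs. TCO of ${solution_tco_per_year:,.0f}")
```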
Gartner: CIOs must prepare for generative AI disruption
Beyond business leaders, Gartner noted that governments have also made a strong
commitment to AI and are prioritising strategies and plans that recognise it as
a key technology in both the private and public sectors. This
includes incorporating AI into long-term national planning, which is being
reinforced through the implementation of corresponding acts and regulations to
bolster AI initiatives. “Implementation at a national level will solidify AI
as a catalyst for enhancing productivity to boost the digital economy,” said
Plummer. “Successful implementation of large-scale AI initiatives necessitates
the support and collaboration of diverse stakeholders, showcasing the
mobilisation and convening ability of national resources.” Among the key
application areas for CIOs and IT leaders is the ability for generative AI to
help IT departments manage older systems. According to Gartner, generative AI
tools will be used to explain legacy business applications and create
appropriate replacements, reducing modernisation costs by 70% by 2027.
CIOs assess generative AI's risk and reward for software engineers
While most CIOs are choosing to keep generative AI tools away from production
environments, it might not be long before IT professionals start using
generative AI for disparate elements of the software development and
engineering process. "The main message I have is to get your staff up to date
and put the resources into training, and then take advantage of it," she says.
"It's incredible what you can do with code generation now. I could build an
entire application without knowing any JavaScript or how to code. But you must
be educated on all the pluses and the minuses -- and that doesn't happen
overnight." That's a sentiment that resonates with Omer Grossman, global CIO
at CyberArk. In an interview with ZDNET, he suggests now is the time to start
exploring generative AI. "Leaders should make decisions," he says. "And I'm
emphasizing that point because if you don't make any decisions because you are
risk-averse, you risk missing out." For business leaders who are thinking
about how to use generative AI in areas such as software development and
engineering, Grossman suggests a range of steps.
Closing ‘AI confidence gap’ key to powering its benefits for society and planet
The research by BSI, the UK-headquartered business improvement and standards
company, was commissioned to launch the Shaping Society 5.0 essay collection,
which explores how AI innovations can be an enabler that accelerates
progress. It highlights the importance of building greater trust in the
technology, as many expect AI to be commonplace by 2030, for example in
automated lighting at home (41%), automated vehicles (45%), or biometric
identification for travel (40%). A little over a quarter (26%) expect AI to be
regularly used in schools within just seven years. Interestingly, three-fifths
of the respondents globally (61%) want international guidelines to enable
the safe use of AI, indicating the importance of guardrails to ensure AI’s
safe and ethical use and build trust. For example, safeguards on the
ethical use of patient data in healthcare are important to 55% of the
respondents of the survey globally. Engagement with AI is markedly higher in
two of the fastest-growing economies: in China (70%) and India (64%),
respondents already use AI every day at work.
Exponential Thinking: The Secret Sauce Of Digital Transformation
The first crucial step in embracing exponential thinking is to reframe your
relationship with fear and failure. We often view challenges or setbacks as
threats, paralyzing us into inaction. Instead, reframe your fears as
opportunities for learning and growth. When faced with a challenge, ask
yourself questions like, "What can I learn from this?" or "How can this
experience help me grow?" This shift in perspective will make you more
resilient and open to new experiences, which is the core foundation for
exponential thinking. ... Exponential thinking, which leads to exponential
growth, rarely happens in isolation; it's a team effort. Make it a point to
regularly interact with people outside your immediate team and field of
expertise; connect with folks from different departments and even different
fields. Whether it's through inter-departmental meetings, cross-functional
projects or internal hackathons, the fusion of different perspectives can
ignite innovative solutions with exponential potential. In a world aiming for
exponential success, an organizational culture that champions team
collaboration across all departments is not just beneficial—it's
imperative.
Hackers Hit Secure File Transfer Software Again and Again
Vulnerabilities continue to surface in file transfer tools. In May, Australian
cybersecurity firm Assetnote alerted Citrix to a critical vulnerability in the
ShareFile storage zones controller, or SZC, in its cloud-based secure
file-sharing and storage service known as Citrix Content Collaboration. Citrix
patched the flaw on May 11, notified customers directly about the
vulnerability and helped them lock it down. Citrix also blocked unpatched
hosts from connecting to its cloud component, thus limiting any hacking impact
to a customer's own environment. The U.S. Cybersecurity and Infrastructure
Security Agency warned in August that the Citrix ShareFile vulnerability was
being actively exploited by attackers. ... Security experts have warned users
of secure file transfer software to safeguard themselves, given the risk of
more such attacks perpetrated by Clop or copycats. One challenge with Clop's
attacks is that the group has somehow continued to obtain access to zero-day
vulnerabilities in the products, meaning even fully patched software could be
- and was - exploited.
How Do We Manage AI Hallucinations?
The analogy between fictitious responses produced by a machine and sensory
phenomena in humans is clear: Both produce information that is not grounded in
reality. Just as humans experiencing hallucinations may see vivid, realistic
images or hear sounds reminiscent of real auditory phenomena, LLMs may produce
information in their “minds” that appears real but is not. ... While the
ultimate causes of AI hallucinations remain somewhat unclear, a number of
potential explanations have emerged. These phenomena are often related to
inadequate data provision during design and testing. If a limited amount of
data is fed into the model at the outset, it will rely on that data to
generate future output, even if the query is reliant on an understanding of a
different type of data. This is known as overfitting, where the model is
highly tuned to a certain type of data but incapable of adapting to new types
of data. The generalizations learned by the model may be highly effective for
the original data sets but not applicable to unrelated data sets.
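As a small illustration of overfitting in the classical sense, the sketch below fits a low-degree and a high-degree polynomial to the same handful of noisy points; the high-degree model tracks the training data closely but typically generalizes worse. This is only a toy statistical analogy for the idea in the passage, not the actual mechanism inside an LLM.

```python
import numpy as np

# Toy illustration of overfitting: a high-degree polynomial matches a small
# training set almost exactly but tends to generalize poorly beyond it.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, x_train.size)

low_deg = np.polyfit(x_train, y_train, deg=3)    # modest model
high_deg = np.polyfit(x_train, y_train, deg=7)   # enough freedom to memorize the points

x_test = np.linspace(0, 1, 100)
y_true = np.sin(2 * np.pi * x_test)

err_low = np.mean((np.polyval(low_deg, x_test) - y_true) ** 2)
err_high = np.mean((np.polyval(high_deg, x_test) - y_true) ** 2)
print(f"test MSE, degree 3: {err_low:.3f}")
print(f"test MSE, degree 7: {err_high:.3f}  # typically much worse off the training points")
```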
When your cloud project is over budget
This is likely your fault since you did not plan well and missed many things
that became unexpected costs or delays. Also, there are known budget issues
around migrating or developing new systems and how much they cost to operate
after being deployed. We’re talking about both. Not everyone is an excellent
planner, but there is a discipline to project management, including metrics
and estimation approaches, that most IT projects choose to ignore. These
approaches provide a rough estimate of how much time and money should be needed to do
something meaningful in the cloud. Ignoring these guidelines is never good, so
let’s learn from our mistakes and improve project planning. ... Engage in
proactive communication with your cloud service providers to discuss your
situation and explore any potential options for cost reduction. Yes, this
means begging for a discount. Providers may offer flexible pricing plans,
reserved instances, or cost optimization guides since it’s their system. Also,
this may mean that you have to agree to future commitments for cloud service
usage that may fall outside the current budget period. This could be an ethical or
policy no-no at your company, so check with the CFO.
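To show the kind of back-of-the-envelope math that conversation involves, here is a sketch comparing on-demand usage with a one-year commitment. The hourly rate and discount are assumptions; real numbers come from the provider's own pricing pages or cost-optimization reports.

```python
# Rough sketch of the trade-off behind committing to reserved capacity.
# Hourly rate and discount are placeholder assumptions, not provider quotes.

on_demand_rate = 0.40          # $/hour per instance on demand (assumed)
reserved_discount = 0.35       # assumed discount for a 1-year commitment
hours_per_month = 730
instances = 10
months_committed = 12          # note: may extend beyond the current budget period

on_demand_cost = on_demand_rate * hours_per_month * instances * months_committed
reserved_cost = on_demand_cost * (1 - reserved_discount)

print(f"On-demand for the year:  ${on_demand_cost:,.0f}")
print(f"With 1-year commitment:  ${reserved_cost:,.0f}")
print(f"Savings (if the commitment is actually used): ${on_demand_cost - reserved_cost:,.0f}")
```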
Bracing for AI-enabled ransomware and cyber extortion attacks
In addition to state-sponsored attacks by APTs, governments must deal with
their fair share of criminal activity as well, particularly at lower levels of
government where cybersecurity resources are especially scarce. This includes
attacks against police departments, public schools, healthcare systems, and
others. These attacks ramped up in 2023, a trend we expect to continue as
cybercriminals look for easy targets from which to steal sensitive data like
PII. Ransomware groups’ success is often less about technological
sophistication and more about their ability to exploit the human element in
cyber defenses. Unfortunately, this is exactly the area where we can expect AI
to be of the greatest use to criminal gangs. Chatbots will continue to remove
language barriers to crafting believable social engineering attacks, learn to
communicate believably, and even lie to get what they want. As developers
release ethically dubious and amoral large language models in the name of free
speech and other justifications, these models will also be used to craft novel
threats.
Quote for the day:
"Many of life’s failures are people who did not realize how close they were
to success when they gave up." -- Thomas Edison