Quote for the day:
“The more you lose yourself in something bigger than yourself, the more energy you will have.” -- Norman Vincent Peale
Breaking the humanoid robot delusion
 The robot is called NEO. The company says NEO is the world’s first
consumer-ready humanoid robot for the home. It is designed to automate routine
chores and offer personal help so you can spend time on other things. ... Full
autonomy in perceiving, planning, and manipulating like a human is a massive
technology challenge. Robots have to be meticulously and painstakingly trained
on every single movement, learn to recognize every object, and “understand” —
for lack of a better word — how things move, how easily they break, what goes
where, and what constitutes appropriate actions. One major way humanoid robots
are trained is with teleoperation. A person wearing special equipment remotely
controls prototype robots, training them for many hours on how to, say, fold a
shirt. Many hours more are required to train the robot how to fold a smaller
child’s shirt. Every variable, from the height of the folding table to the
flexibility of the fabrics, has to be trained separately. ... The temptation to
use impressive videos of remotely controlled robots (where you can’t see the
person controlling them) to raise investment money, inspire stock purchases, and
outright sell robot products appears to be too strong to resist. Realistically,
the technology for a home robot that operates autonomously the way the NEO
appears to do in the videos in arbitrary homes under real-world conditions is
many years in the future, possibly decades.
Your vendor’s AI is your risk: 4 clauses that could save you from hidden liability
 The frontier of exposure now extends to your partners’ and vendors’ use. The
main question: Are they embedding AI into their operations in ways you
  don’t see until something goes wrong? ... Require vendors to formally disclose
  where and how AI is used in their delivery of services. That includes the
  obvious tools and embedded functions in productivity suites, automated
  analytics and third-party plug-ins. ... Include explicit language that
  your data may not be used to train external models, incorporated into vendor
  offerings or shared with other clients. Require that all data handling comply
  with the strictest applicable privacy laws and specify that these obligations
  survive the termination of the contract. ... Human oversight ensures that
  automated outputs are interpreted in context, reviewed for bias and corrected
  when the system goes astray. Without it, organizations risk over-relying on
  AI’s efficiency while overlooking its blind spots. Regulatory frameworks are
  moving in the same direction: for example, high-risk AI systems must have
  documented human oversight mechanisms under the EU AI Act. ... Negotiate
  liability provisions that explicitly cover AI-driven issues, including
  discriminatory outputs, regulatory violations and errors in financial or
  operational recommendations. Avoid generic indemnity language. Instead,
  AI-specific liability should be made its own section in the contract, with
  remedies that scale to the potential impact.
AI chatbots are sliding toward a privacy crisis
 The problem reaches beyond internal company systems. Research shows that some of
the most used AI platforms collect sensitive user data and share it with third
parties. Users have little visibility into how their information is stored or
reused, leaving them with limited control over its life cycle. This leads to an
important question about what happens to the information people share with
chatbots. ... One of the more worrying trends in business is the growing use of
shadow AI, where employees turn to unapproved tools to complete tasks faster.
These systems often operate without company supervision, allowing sensitive data
to slip into public platforms unnoticed. Most employees admit to sharing
information through these tools without approval, even as IT leaders point to
data leaks as the biggest risk. While security teams see shadow AI as a serious
problem, employees often view it as low risk or a price worth paying for
convenience. “We’re seeing an even riskier form of shadow AI,” says Tim Morris,
“where departments, unhappy with existing GenAI tools, start building their own
solutions using open-source models like DeepSeek.” ... Companies need to do a
better job of helping employees understand how to use AI tools safely. This
matters most for teams handling sensitive information, whether it’s medical data
or intellectual property. Any data leak can cause serious harm, from damaging a
company’s reputation to leading to costly fines.
The true cost of a cloud outage
 The top 2000 companies in the world pay approximately $400 billion for downtime
each year. A simple calculation reveals that these organizations, including the
Dutch companies ASML, Nationale Nederlanden, AkzoNobel, Philips, and Randstad,
lose around $200 million from their annual accounts due to unplanned downtime.
Incidentally, what the Splunk study really revealed were the hidden costs of
financial damage caused by problems with security tools, infrastructure, and
applications. These can wipe billions off market values. ... A more conservative
estimate of downtime costs can be found at Information Technology Intelligence
Consulting, which conducted research on behalf of Calyptix Security. The
majority of the parties surveyed had more than 200 employees, but the
combination was more diverse than the top 2000 companies worldwide. The costs of
downtime were substantial: at least $300,000 per hour for 90 percent of the
companies in question. Forty-one percent stated that IT outages cost between $1
million and $5 million. ... In theory, the largest companies can rely on a
multicloud strategy, and hyperscalers absorb many local outages by routing
traffic to other regions. However, multicloud is not something a start-up or SME
can simply set up, and applications are rarely built in fully redundant form
across different clouds. It is also quite possible that your own staff can keep
working while your product remains inaccessible to customers.
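The arithmetic behind these figures is easy to make explicit. A short sketch relating the two studies quoted above (the $400 billion and 2,000-company numbers are from the Splunk figure; applying the ITIC floor of $300,000 per hour to the resulting average is purely an illustrative assumption, since the two surveys cover different company populations):

```python
# Sanity-check of the downtime figures quoted in the article.
TOTAL_ANNUAL_COST = 400e9   # $400B per year across the Global 2000 (Splunk)
NUM_COMPANIES = 2000

per_company = TOTAL_ANNUAL_COST / NUM_COMPANIES
print(f"Average annual downtime cost per company: ${per_company / 1e6:.0f}M")

# ITIC/Calyptix: at least $300,000 per hour for 90% of surveyed companies.
# Purely as an illustration: how many outage hours at that floor rate
# would add up to the $200M average above?
ITIC_HOURLY_FLOOR = 300_000
equivalent_hours = per_company / ITIC_HOURLY_FLOOR
print(f"Equivalent outage hours at $300k/hour: {equivalent_hours:.0f}")
```

The mismatch (hundreds of hours per year) mostly reflects that the Splunk figure folds in hidden and indirect costs, not only hours of hard outage.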
5 Reasons Why You’re Not Landing Leadership Roles
 Is your posture confident? Do you maintain steady eye contact? Is the cadence,
pace and volume of your voice engaging, assertive and compelling? Recruiters
assess numerous factors on the executive presence checklist. ... Are you showing
a grasp of the prospective employer’s pain points and demonstrating an original
point of view for how you will approach these problems? Treat senior level
interviews like consulting RFPs – you are an expert on their business,
uncovering potential opportunities with insightful questions, and sharing enough
of your expertise that you’re perceived as the solution. ... Title bumps are
rare, so you need to give the impression that you are already operating at the
C-level in order to be hired as such. Your interview examples should include
stories about how you initiated new ideas or processes, as well as measurable
results that impact the bottom line. Your examples should specify how many
people and dollars you have managed. Ideally, you have stories that show you can
get results in up and down markets. ... The hiring process extends over multiple
rounds, especially for leadership roles. Keep track of everyone that you have
met, as well as what you have specifically discussed with each of them. Send
personalized follow-up emails that engage each interviewer uniquely based on
what you discussed. This differentiates you as someone who listens and cares
about them specifically.
Why understanding your cyber exposure is your first line of defence
 Thanks to AI, attacks are faster, more targeted and increasingly sophisticated.
As the lines between the physical and digital blur, the threat is no longer
isolated to governments or critical national infrastructure. Every organisation
is now at risk. Understanding your cyber exposure is the key to staying ahead.
This isn’t just a buzzword either; it’s about knowing where you stand and what’s
at risk. Knowing every asset, every connection, every potential weakness across
your digital ecosystem is now the first step in building a defence that can keep
pace with modern threats. But before you can manage your exposure, you need to
understand what’s driving it – and why the modern attack surface is so difficult
to defend. ... By consolidating data from across the environment and layering it
with contextual intelligence, cyber exposure management allows security teams to
move beyond passive monitoring. It’s not just about seeing more, it’s about
knowing what matters and acting on it. That means identifying risks earlier,
prioritising them more effectively and taking action before they escalate. ...
Effective and modern cybersecurity is shifting to shaping the battlefield before
threats even arrive. That’s down to the value of understanding your cyber
exposure. After all, it’s not just about knowing what’s in your environment,
it’s about knowing how it all fits together – what’s exposed, what’s critical
and where the next threat is likely to emerge.
Applications and the afterlife: how businesses can manage software end of life
 Both enterprise software and personal applications have a lifecycle, set by the
vendor’s support and maintenance. Once an application or operating system goes
out of support, it will continue to run. But there will be no further feature
updates and vitally, often no security patches. ... When software end of life is
unexpected, it can cause serious disruption to business processes. In the very
worst-case scenarios, enterprises will only know there is a problem when a key
application no longer functions, or if a malicious actor exploits a
vulnerability. The problem for CIOs and CISOs is keeping track of the end of
life dates for applications across their entire stack, and understanding and
mapping dependencies between applications. This applies equally to in-house
applications, off the shelf software and open source. “End of life software is
not necessarily bad,” says Matt Middleton-Leal, general manager for EMEA at
Qualys. “It’s just not updated any more, and that can lead to vulnerabilities.
According to our research, nearly half of the issues on the CISA Known Exploited
Vulnerabilities (KEV) list are found in outdated and unsupported
software.” As CISA points out, attackers are most likely to exploit older
vulnerabilities and to target unpatched systems. Risks come from old, known
vulnerabilities that IT teams should already have patched.
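The tracking problem described above can start as something very simple: an inventory mapping each application to its vendor end-of-support date, checked on a schedule. A minimal sketch (the application names and dates here are hypothetical; real dates would come from vendor lifecycle pages, a CMDB, or a source such as endoflife.date):

```python
from datetime import date

# Hypothetical inventory: application -> vendor end-of-support date.
inventory = {
    "legacy-crm": date(2024, 6, 30),
    "reporting-db": date(2025, 3, 1),
    "payroll-app": date(2026, 12, 31),
}

def eol_report(apps, today, warn_days=180):
    """Flag software that is past end of life or approaching it."""
    for name, eol in sorted(apps.items(), key=lambda kv: kv[1]):
        days_left = (eol - today).days
        if days_left < 0:
            print(f"{name}: OUT OF SUPPORT since {eol} -- no security patches")
        elif days_left <= warn_days:
            print(f"{name}: {days_left} days until end of support ({eol})")

eol_report(inventory, date(2025, 1, 15))
```

The harder part, as the article notes, is the dependency mapping: an in-support application that depends on an out-of-support database inherits its risk, so the inventory needs dependency edges, not just dates.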
Tips for CISOs switching between industries
 Building a transferable skill set is essential for those looking to switch
industries. For Dell’s first-ever CISO, Tim Youngblood, adaptability was never a
luxury but a requirement. His early years as a consultant at KPMG gave him a
front-row seat to the challenges of multiple industries before he ever moved
into cybersecurity. Those early years also taught Youngblood that while every
industry has its own nuances, the core security principles remain constant. ...
Making the jump into a new industry isn’t about matching past job titles but
about proving you can create impact in a new context. DiFranco says the key is
to demonstrate relevance early. “When I pitch a candidate, I explain what they
did, how they did it, and what their impact was to their organization in their
specific industry,” he says. “If what they did and how they did it, and what
their impact was on the organization resonates where that company wants to go,
they’re a lot more likely to say, ‘I don’t really care where this person comes
from because they did exactly what I want done in this organization.’” ... The
biggest career risk for many CISOs isn’t burnout or a data breach; it’s being seen
as a one-industry operator. Ashworth’s advice is to focus on demonstrating
transferable skills. “It’s a matter of getting whatever job you’re applying for,
to realise that those principles are the same, no matter what industry you’re
in. Whether it’s aerospace, healthcare, or finance, the principles are the same.
Show that, and you’ll avoid being pigeonholed.”
Awareness Is the New Armor: Why Humans Matter Most in Cyber Defense
 People remain the most unpredictable yet powerful variable in cybersecurity.
Lapses like permission misconfiguration, accidental credential exposure, or
careless data sharing continue to cause most incidents. Yet when equipped with
the right tools and timely information, individuals can become the strongest
line of defense. The challenge often stems from behavior rather than intent.
Employees frequently bypass security controls or use unapproved tools in pursuit
of productivity, unintentionally creating invisible vulnerabilities that go
unnoticed within traditional defences. Addressing this requires more than
restrictive policies. Security must be built into everyday workflows so that
safe practices become second nature. ... Since technology alone cannot secure an
organization, a culture of security-first thinking is essential. Leaders must
embed security into everyday workflows, promote upskilling, and focus on
reinforcement rather than punishment. This creates a workforce that takes
ownership of cybersecurity, checking email sources, verifying requests, and
maintaining vigilance in every interaction. Stay Safe Online is both a reminder
and a rallying cry. India’s digital economy presents immense opportunity, but
its threat surface expands just as fast.
Creepy AI Crawlers Are Turning the Internet into a Haunted House
 The degradation of the internet and market displacement caused by commercial AI
crawlers directly undermines people’s ability to access information online. This
happens in various ways. First, the AI crawlers put significant technical strain
on the internet, making it more difficult and expensive to access for human
users, as their activity increases the time needed to access websites. Second,
the LLMs trained on this scraped content now provide answers directly to user
queries, reducing the need to visit the original sources and cutting off the
traffic that once sustained content creators, including media outlets. ... AI
crawlers represent a fundamentally different economic and technical
proposition––a vampiric relationship rather than a symbiotic one. They harvest
content, news articles, blog posts, and open-source code without providing the
semi-reciprocal benefits that made traditional crawling sustainable. Little
traffic flows back to sources, especially when search engines like Google start
to provide AI generated summaries rather than sending traffic on to the websites
their summaries are based on. ... What makes this worse is that these actors
aren’t requesting books to read individual stories or conduct genuine research,
they’re extracting the entire collection to feed massive language model systems.
The library’s resources are being drained not to serve readers, but to build
commercial AI products that will never send anyone back to the library itself.