The promise of collective superintelligence

The goal is not to replace human intellect, but to amplify it by connecting
  large groups of people into superintelligent systems that can solve problems
  no individual could solve on their own, while also ensuring that human values,
  morals, and interests are embedded at every level. This might sound unnatural,
  but it’s a common step in the evolution of many social species. Biologists
  call the phenomenon Swarm Intelligence and it enables schools of
  fish, swarms of bees and flocks of birds to skillfully navigate their world
  without any individual being in charge. They don’t do this by taking votes or
  polls the way human groups make decisions. Instead, they form real-time
  interactive systems that push and pull on the decision-space and converge on
  optimized solutions. ... Can we enable conversational swarms in humans? It
  turns out we can, using a concept developed in 2018 called hyperswarms,
  which divides real-time human groups into
  overlapping subgroups. ... Of course, enabling parallel groups is not enough
  to create a Swarm Intelligence. That’s because information needs to propagate
  across the population. This was solved using AI agents to emulate the function
  of the lateral line organ in fish.
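To make the mechanics concrete, here is a minimal Python sketch of the hyperswarm idea described above: a population is split into overlapping subgroups, and a stand-in "agent" relays each subgroup's summary to its neighbours, loosely emulating the lateral line organ. All names and numbers are illustrative assumptions, not details from the cited research.

```python
import random

def build_hyperswarm(participants, group_size=5, overlap=2):
    """Split a population into subgroups where consecutive subgroups
    share `overlap` members, so local discussions stay connected."""
    random.shuffle(participants)
    step = group_size - overlap
    groups = []
    for start in range(0, len(participants) - overlap, step):
        groups.append(participants[start:start + group_size])
    return groups

def propagate(groups, summaries):
    """Stand-in for the AI agents: relay each neighbouring subgroup's
    summary into a group, so information flows across the population."""
    for i in range(len(groups)):
        neighbours = summaries[max(0, i - 1):i] + summaries[i + 1:i + 2]
        print(f"group {i} receives: {neighbours}")

people = [f"person-{i}" for i in range(20)]
groups = build_hyperswarm(people)
propagate(groups, [f"summary-{i}" for i in range(len(groups))])
```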
There's Only One Way to Solve the Cybersecurity Skills Gap

The plain truth is that it's not just a numbers game. Many of these roles are
considered "hard to fill" because they are for specialist skill sets such as
forensic analysis, security architecture, interpreting malicious code, or
penetration testing. Or they're for senior roles with three to six years'
experience. Even if companies recruit people with high potential but not the
requisite background, it will take years for these recruits to upskill to reach
a sufficient standard. Moreover, if we throw open the gates completely, we risk
diluting the industry by introducing a whole swath of people with no technical
skills. Yes, soft skills are valuable and in short supply too, but relying on
these alone to fill the workforce gap does nothing to address the problem
businesses have: a lack of trained, competent cybersecurity professionals,
resulting, once again, in less resilience. Another major hurdle is that many
organizations are reluctant to invest in training because the job market is so
volatile. There's a fear that newly trained recruits will become a flight risk
and put themselves back into the talent pool.
The Struggle for Microservice Integration Testing

Integration testing is crucial for microservices architectures. It validates the
interactions between different services and components, and you can’t
successfully run a large architecture of isolated microservices without
integration testing. In a microservices setup, each service is designed to
perform a specific function and often relies on other services to fulfill a
complete user request. While unit tests ensure that individual services function
as expected in isolation, they don’t test the system’s behavior when services
communicate with each other. Integration tests fill this gap by simulating
real-world scenarios where multiple services interact, helping to catch issues
like data inconsistencies, network latency, and fault-tolerance gaps early in the
development cycle. Integration testing also provides a safety net for CI/CD
pipelines. Without comprehensive integration tests, it’s easy for automated
deployments to introduce regressions that affect the system’s overall behavior.
By automating these tests, you can ensure that new code changes don’t disrupt
existing functionalities and that the system remains robust and scalable.
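As a concrete illustration, below is a minimal integration-test sketch in Python using pytest and requests. The services, ports, and endpoints are hypothetical placeholders (an orders service that reserves stock through a separate inventory service), assumed to be running locally before the test executes.

```python
# Hypothetical setup: an `orders` service on :8001 that reserves stock
# by calling an `inventory` service on :8002. Both are assumed running.
import requests

ORDERS_URL = "http://localhost:8001"
INVENTORY_URL = "http://localhost:8002"

def test_order_reserves_inventory():
    # Arrange: read the current stock level straight from inventory.
    before = requests.get(f"{INVENTORY_URL}/stock/sku-123", timeout=5).json()

    # Act: place an order through the orders service, which should in
    # turn call the inventory service.
    resp = requests.post(
        f"{ORDERS_URL}/orders",
        json={"sku": "sku-123", "quantity": 1},
        timeout=5,
    )
    assert resp.status_code == 201

    # Assert: both services agree on the new stock level -- the
    # cross-service consistency a unit test cannot verify.
    after = requests.get(f"{INVENTORY_URL}/stock/sku-123", timeout=5).json()
    assert after["available"] == before["available"] - 1
```

In a CI/CD pipeline, a test like this would run after both services are deployed to an ephemeral environment, gating promotion of new builds.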
Google Cloud’s Cybersecurity Trends to Watch in 2024 Include Generative AI-Based Attacks
Threat actors will use generative AI and large language models in phishing and
other social engineering scams, Google Cloud predicted. Because generative AI
can create natural-sounding content, employees may no longer be able to identify
scam emails by their poor grammar, or spam calls by their robotic-sounding voices.
Attackers could use generative AI to create fake news or fake content, Google
Cloud warned. LLMs and generative AI “will be increasingly offered in underground
forums as a paid service, and used for various purposes such as phishing
campaigns and spreading disinformation,” Google Cloud wrote. On the other hand,
defenders can use generative AI in threat intelligence and data analysis.
Generative AI could allow defenders to take action at greater speeds and scales,
even when digesting very large amounts of data. “AI is already providing a
tremendous advantage for our cyber defenders, enabling them to improve
capabilities, reduce toil and better protect against threats,” said Phil
Venables, chief information security officer at Google Cloud, in an email to
TechRepublic.
OpenAI’s gen AI updates threaten the survival of many open source firms

The new API, according to OpenAI, is expected to provide new capabilities
  including a Code Interpreter, Retrieval Augmented Generation (RAG), and
  function calling to handle “heavy lifting” that would previously require
  developer expertise in order to build AI-driven applications. The Assistants
  API, specifically, may cause revenue losses for open source companies
  including LangChain, LlamaIndex, and ChromaDB, according to Andy Thurai,
  principal analyst at Constellation Research. “For organizations that want to
  standardize on OpenAI, the more their platform offers, the less organizations
  will need other frameworks such as LangChain and LlamaIndex. The new updates
  allow developers to create their applications within a single framework,” said
  David Menninger, executive director at Ventana Research. However, he pointed
  out that until the new features, such as the new API, are made generally
  available, enterprises will continue to put applications into production by
  relying on existing open source frameworks.
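For context, here is a sketch of what the Assistants API looked like to developers at launch, assuming the openai Python SDK (v1.x); the model name, instructions, and prompt are illustrative, not taken from the article.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Built-in tools cover work that previously needed separate frameworks:
# Code Interpreter for execution, Retrieval for file-backed RAG.
assistant = client.beta.assistants.create(
    name="Report helper",  # illustrative name
    instructions="Answer questions about the uploaded reports.",
    tools=[{"type": "code_interpreter"}, {"type": "retrieval"}],
    model="gpt-4-1106-preview",
)

# Conversation state lives in server-side threads, so developers no
# longer manage chat history or context chunking themselves.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Summarize Q3 revenue."
)
run = client.beta.threads.runs.create(
    thread_id=thread.id, assistant_id=assistant.id
)
```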
When net-zero goals meet harsh realities
There is a move towards greater precision and accountability at the
  non-governmental level, too. The principles of carbon emission measurement and
  reporting that underpin, for example, all corporate net-zero objectives tend
  to be agreed upon internationally by institutions such as the World Resources
  Institute and the World Business Council for Sustainable Development; in turn,
  these are used by bodies such as the Science Based Targets initiative (SBTi)
  and CDP. Here too, standards are
  being rewritten, so that, for example, the use of carbon offsets is becoming
  less acceptable, forcing operators to buy carbon-free energy directly. With
  all these developments under way, there is a startling disconnect between many
  of the public commitments by countries and companies, and what most digital
  infrastructure organizations are currently doing or are able to do. ... The
  difference between the two surveys highlights a second disconnect. IBM’s
  findings, based on responses from senior IT and sustainability staff, show a
  much higher proportion of organizations collecting carbon emission data than
  Uptime’s.
CISOs Beware: SEC's SolarWinds Action Shows They're Scapegoating Us

The SEC had been trying to create accountability by holding a board
  accountable and liable for issues concerning cybersecurity incidents that
  inevitably occur from time to time. But now, in the case of SolarWinds, the
  SEC has turned around and directly gone after an individual, Timothy Brown,
  who is only now the CISO. Brown wasn't the CISO when the breaches happened. He had been
  SolarWinds' VP of security and architecture and head of its information
  security group between July 2017 and December 2020, and he stepped into the
  role of CISO in January 2021. The result of the SEC's failure to mandate
  security leadership on corporate boards is that it has resorted to holding
  the CISO liable. This shift underscores a significant transformation in the
  CISO landscape. From my perspective as a CISO, it's increasingly clear that
  technical security expertise is an essential requirement for the role. Each
  day, CISOs are tasked with making critical decisions, such as approving or
  accepting timeline adjustments for security risks that have the potential to
  be exploited. 
Security in the impending age of quantum computers
The timeline for developing a cryptographically relevant quantum computer is
  highly contested, with estimates often ranging between 5 and 15 years.
  Although the arrival of such a quantum computer remains in the future,
  this does not make it a problem only for future CIOs and IT professionals. The
  threat is live today because of “harvest now, decrypt later”
  attacks, whereby an adversary stores encrypted communications and data gleaned
  through classical cyberattacks and waits until a cryptographically relevant
  quantum computer is available to decrypt the information. Worse, data
  secured with weak encryption keys could be decrypted long before a
  cryptographically relevant quantum computer exists. While some data clearly loses its value in
  the short term, social security numbers, health and financial data, national
  security information, and intellectual property retain value for decades and
  the decryption of such data on a large scale could be catastrophic for
  governments and companies alike.
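A common way to reason about this urgency, though not named in the article, is Mosca's inequality: if the years data must stay secret (x) plus the years a post-quantum migration takes (y) exceed the years until a cryptographically relevant quantum computer arrives (z), data harvested today is already exposed. A tiny sketch with placeholder numbers:

```python
def harvest_now_decrypt_later_risk(shelf_life_years: float,
                                   migration_years: float,
                                   years_to_quantum: float) -> bool:
    """Mosca's inequality: data is at risk when x + y > z."""
    return shelf_life_years + migration_years > years_to_quantum

# Placeholder numbers: health records private for 25 years, a 5-year
# post-quantum migration, and a quantum computer in 10 years (mid-range
# of the 5-to-15-year estimates above).
print(harvest_now_decrypt_later_risk(25, 5, 10))  # True: already at risk
```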
How the Online Safety Act will impact businesses beyond Big Tech

The requirements that apply to all regulated services, including those outside
  the special categories, are naturally the least onerous under the Act;
  however, because these still introduce new legal obligations, for many
  businesses these will require considering compliance through a new lens. ...
  Regulated services will have to conduct certain risk assessments at defined
  intervals. The type of risk assessments a service provider must conduct
  depends on the nature and users of the service. Illegal content assessment: all
  providers of regulated services must conduct a risk assessment of how likely
  users are to encounter and be harmed by illegal content, taking into account a
  range of factors including user base, design and functionalities of the
  service and its recommender systems, and the nature and severity of harm that
  individuals might suffer due to this content. ... All regulated services must
  carry out an assessment of whether the service is likely to be accessed by
  children, and if so they must carry out a children’s risk assessment of how
  likely children are to encounter and be harmed by content on the site, giving
  separate consideration to children in different age groups.
Enterprises vs. The Next-Generation of Hackers – Who’s Winning the AI Race?

Amidst a push for responsible AI development, major players in the space are
  on a mission to secure their tools from malicious use, but bad actors have
  already started to take advantage of the same tech to boost their skill sets.
  Enterprises are increasingly finding new ways to integrate AI into internal
  workflows and external offerings, which in turn has created a new attack
  vector for hackers. This expanded surface has opened the door for a new wave
  of sophisticated attacks using advanced methods and unsuspecting entry points
  that enterprises previously didn’t have to secure against. ... Today’s threat
  landscape is transforming — hackers have tools at their fingertips that can
  rapidly advance their impact and an entirely new attack vector to explore.
  With growing enterprise use of AI offering an opportunity to expedite attacks,
  now is the time to focus on transforming security defenses. ... Despite
  scrutiny of AI's ability to equip cybercriminals with more advanced
  techniques, the same models can be used just as effectively by security and IT
  teams to mitigate these mounting threats.
Quote for the day:
"Doing what you love is the
    cornerstone of having abundance in your life." -- Wayne Dyer