Why microservices might be finished as monoliths return with a vengeance

Migrating to a microservice architecture has been known to cause complex
  interactions between services, circular calls, data integrity issues and, to
  be honest, it is almost impossible to get rid of the monolith completely.
Let’s discuss why some of these issues occur once a system has been migrated
  to a microservices architecture. ... When moving to a microservices architecture,
  each client needs to be updated to work with the new service APIs. However,
  because clients are so tied to the monolith’s business logic, this requires
  refactoring their logic during the migration. Untangling these dependencies
  without breaking existing functionality takes time. Some client updates are
  often delayed due to the work’s complexity, leaving some clients still using
  the monolith database after migration. To avoid this, engineers may create new
  data models in a new service but keep existing models in the monolith. When
  models are deeply linked, this leads to data and functions split between
  services, causing multiple inter-service calls and data integrity issues. ...
  Data migration is one of the most complex and risky elements of moving to
  microservices. It is essential to accurately and completely transfer all
  relevant data to the new microservices. 
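The data-split problem described above can be sketched in a few lines. This is a hypothetical illustration, not code from the article: assume an Order model that moved to a new service while the deeply linked Customer model stayed behind in the monolith, so one "order details" view now needs two inter-service calls where a single local join used to suffice.

```python
# Hypothetical sketch: after a partial migration, the Order model lives in a
# new service but the Customer model stayed in the monolith, so assembling a
# single "order details" view takes two hops instead of one local join.

# In-memory stand-ins for the two data stores (all names are illustrative).
ORDER_SERVICE_DB = {101: {"order_id": 101, "customer_id": 7, "total": 49.90}}
MONOLITH_DB = {7: {"customer_id": 7, "name": "Ada", "email": "ada@example.com"}}

def order_service_get(order_id):
    """First network hop: the new microservice owns orders."""
    return ORDER_SERVICE_DB[order_id]

def monolith_get_customer(customer_id):
    """Second network hop: customer data never left the monolith."""
    return MONOLITH_DB[customer_id]

def get_order_details(order_id):
    # Deeply linked models force this fan-out; if the second call fails after
    # the first succeeds, callers can observe inconsistent, partial state.
    order = order_service_get(order_id)
    customer = monolith_get_customer(order["customer_id"])
    return {**order, "customer_name": customer["name"]}

print(get_order_details(101))
```

Every such fan-out is a new failure mode and a new place where the two stores can disagree, which is why the excerpt calls data migration the riskiest part of the move.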
InputSnatch – A Side-Channel Attack That Lets Attackers Steal Input Data From LLM Models

Researchers found that both prefix caching and semantic caching, which are
  used by many major LLM providers, can unintentionally leak information about
  what users type. Attackers can potentially reconstruct private user
  queries with alarming accuracy by measuring the response time. The lead
  researcher said, “Our work shows the security holes that come with improving
  performance. This shows how important it is to put privacy and security first
  along with improving LLM inference.” “We propose a novel timing-based
  side-channel attack to execute input theft in LLMs inference. The cache-based
  attack faces the challenge of constructing candidate inputs in a large search
  space to hit and steal cached user queries. To address these challenges, we
  propose two primary components.” “The input constructor uses machine learning
  and LLM-based methods to learn how words are related to each other, and it
  also has optimized search mechanisms for generalized input construction.” ...
  The research team emphasizes the need for LLM service providers and developers
  to reassess their caching strategies. They suggest implementing robust
  privacy-preserving techniques to mitigate the risks associated with
  timing-based side-channel attacks.
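The timing signal that makes such an attack possible can be sketched with a toy simulation. Everything here is invented for illustration (the cached prefix, the delay values, the threshold); it only shows how a cache hit's faster response lets an attacker classify candidate queries as cached or not, which is the building block the paper's input constructor exploits at scale.

```python
import time

# Toy simulation of a cache-based timing side channel: a server that has
# cached a query prefix answers noticeably faster, so an attacker can probe
# candidate inputs and compare latencies. Delays and queries are made up.

CACHED_PREFIXES = {"my bank account"}  # stands in for another user's cached query

def serve(query):
    """Simulated LLM endpoint: a cache hit skips the 'expensive' prefill work."""
    if any(query.startswith(p) for p in CACHED_PREFIXES):
        time.sleep(0.001)   # cache hit: fast path
    else:
        time.sleep(0.05)    # cache miss: full computation
    return "response"

def probe(candidate, threshold=0.01):
    """Attacker-side measurement: classify hit vs. miss from latency alone."""
    start = time.perf_counter()
    serve(candidate)
    return (time.perf_counter() - start) < threshold

for candidate in ["my bank account", "weather today"]:
    print(candidate, "-> cached" if probe(candidate) else "-> not cached")
```

Real systems add network jitter, so a practical attack would average repeated probes; the mitigation direction the researchers suggest is to make hit and miss latencies indistinguishable or to scope caches per user.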
Ransomware Gangs Seek Pen Testers to Boost Quality

As cybercriminal groups grow, specialization becomes a necessity: their
  business structures increasingly resemble those of a corporation, with
  full-time staff, software development groups, and finance teams. By creating
  more structure around roles, cybercriminals can boost
  economies of scale and increase profits. ... some groups required
  specialization in roles based on geographical need — one of the earliest forms
  of contract work for cybercriminals is for those who can physically move cash,
  a way to break the paper trail. "Of course, there's recruitment for roles
  across the entire attack life cycle," Maor says. "When you're talking about
  financial fraud, mule recruitment ... has always been a key part of the
  business, and of course, development of the software, of malware, and end of
  services." Cybercriminals' concerns over software security boil down to
  self-preservation. In the first half of 2024, law enforcement agencies in the
  US, Australia, and the UK — among other nations — arrested prominent members
  of several groups, including the ALPHV/BlackCat ransomware group and seized
  control of BreachForums. The FBI was able to offer a decryption tool for
  victims of the BlackCat group — another reason why ransomware groups want to
  shore up their security.
Forget All-Cloud or All-On-Prem: Embrace Hybrid for Agility and Cost Savings

Hybrid isn’t just about cutting costs — it boosts speed, security, and
  performance. Agile applications run faster in the cloud, where teams can
  quickly spin up, test, and launch without the limits of on-prem systems. This
  agility becomes especially valuable when delivering software quickly to meet
  market demands without compromising the core stability of the entire system.
  Security and compliance are also critical drivers of hybrid adoption.
  Regulatory mandates often require data to remain on-premises to ensure
  compliance with local data residency laws. Hybrid infrastructure allows
  companies to move customer-facing applications to the cloud while keeping
  sensitive data on-prem. This separation of data from the front-end layers has
  become common in sectors like finance and government, where compliance demands
  and data security are non-negotiable. I have been speaking regularly to the
  CTOs of two very large banks in the US. They currently manage 15-20% of their
  workloads in the cloud and estimate the most they will ever have in the cloud
  would be 40-50%. They tell me the rest will stay on-prem — always — so they
  will always need to manage a hybrid environment.
Minimizing Attack Surface in the Cloud Environment

Increasing dependence on cloud environments expands the attack surface: the
  set of potential entry points, including network devices, applications, and
  services, that attackers can exploit to infiltrate the cloud and access
  systems and sensitive data. ... Cloud services rely on APIs for seamless
  integration with third-party applications or services. As
  the number of APIs increases, they expand the attack surface for attackers to
  exploit. Hackers can easily target insecure or poorly designed APIs that lack
  encryption or robust authentication mechanisms and access data resources,
  leading to data leaks and account takeovers. ... A device or application that
  is not approved or supported by the IT team is called shadow IT. Since many of these
  devices and apps do not undergo the same security controls as the corporate
  ones, they become more vulnerable to hacking, putting the data stored within
  them at risk of manipulation. ... Unaddressed security gaps or errors threaten
  the cloud assets and data. Attackers can exploit misconfiguration and
  vulnerabilities in the cloud-hosted services, resulting in data breaches and
  other cyber attacks.
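As a minimal sketch of the kind of check the insecure APIs above omit, the handler below refuses any request that does not present a valid bearer token. The token value, header shape, and responses are placeholders for illustration, not any real cloud provider's API.

```python
import hmac

# Minimal sketch of API authentication: every request must carry a valid
# bearer token before any data is returned. All values are placeholders.

API_TOKEN = "s3cret-token"  # in practice: issued per client, stored securely

def handle_request(headers):
    supplied = headers.get("Authorization", "").removeprefix("Bearer ")
    # Constant-time comparison avoids leaking the token via timing differences.
    if not hmac.compare_digest(supplied, API_TOKEN):
        return {"status": 401, "body": "unauthorized"}
    return {"status": 200, "body": {"records": ["..."]}}

print(handle_request({"Authorization": "Bearer s3cret-token"})["status"])  # 200
print(handle_request({})["status"])                                        # 401
```

A production API would layer TLS, scoped credentials, and rate limiting on top; the point here is only that an unauthenticated endpoint hands attackers a free entry point.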
AI & structured cabling: Are they such unusual bedfellows?

The key word here is “structured” (its synonyms include organized, precise and
  efficient). When “structured” precedes the word “cabling,” it immediately
  points to a standardized way to design and install a cabling system that will
  be compliant with international standards, whilst providing a flexible and
  future-ready approach capable of supporting multiple generations of AI
  hardware. Typically, an AI data center’s structured cabling will be used to
  connect pieces of IT hardware together using high-performance, ultra-low loss
  optical fiber and Cat6A copper. ... What do we know about AI? Network speeds
  are constantly changing, and it feels like it’s happening on a daily basis.
  400G and 800G are a reality today, with 1.6T coming soon. Just a few years
  ago, who would have believed that it was possible? Structured cabling offers
  the type of scalability and flexibility needed to accommodate these speed
  changes and the future growth of AI networks. ... Data centers are the
  “factory floor” of AI operations, and as AI continues to impact all areas of
  our lives, it will become increasingly integrated into emerging technologies
  like 5G, IoT, and Edge computing. This trend will only further emphasize the
  need for robust and scalable high-speed cabling systems.
Business Automation: Merging Technology and Skills

As technology progresses, business owners are eager for solutions that can
  handle repetitive tasks, freeing up time for their teams to focus on more
  strategic activities. One of the most effective strategies to achieve this is
  through business automation—a combination of technology and human skills that
  streamlines processes and boosts productivity. Business automation is designed
  to complement rather than replace human efforts. It helps teams reduce
  repetitive tasks, allowing them to concentrate on what matters most, such as
  improving customer satisfaction and driving innovation. By implementing
  automation, companies can increase productivity as routine jobs—like data
  entry and scheduling—are managed by automated systems. This shift not only
  saves time but also minimises errors associated with manual processes.
  Automation also enables better resource allocation. The insights gained from
  automated tools empower teams to make informed decisions and direct resources
  where they are needed most. Furthermore, real-time reporting offers valuable
  data that supports timely decision-making. Effective team management is
  crucial for any business, and automation can enhance productivity and
  accountability. 
Scaffolding for the South Africa National AI Policy Framework

The lack of specific responsibility assignment and cross-sectoral coordination
  mechanisms undermines the framework’s utility in guiding downstream activity.
  It is not too early to start articulating appropriate institutional
  arrangements, or encouraging debates between different models. A proposed
  multi-stakeholder platform to guide implementation lacks details about
  representation, participation criteria, and decision-making processes. This
  institutional uncertainty is further complicated by strained budgets and
  unclear funding mechanisms for new structures. Next, the framework’s
  integration with the existing policy landscape is inadequate. There is value in
  horizontal policy coherence across trade, competition, and other sectors.
  Reference to South Africa’s developmental policy course as articulated in the
  various Medium-Term Strategic Frameworks and in the National Development Plan
  2030 would be helpful. There is a focus on transformation, development, and
  capacity-building, strengthening the intentions set out in the 2019 White
  Paper on Science, Technology and Innovation, which emphasizes ICT's role in
  furthering developmental goals within a socio-economic context marked by high
  unemployment rates.
The DevSecOps Mindset: What It Is and Why You Need It

Navigating the delicate balance between speed and security is challenging for
  all organizations. That’s why so many are converting to the DevSecOps mindset.
  That said, the transition is not all smooth sailing. Below are a few common
  factors that stand in the way of the security-first approach:
  - Cultural Resistance: Teams may resist integrating security into fast-moving
    DevOps pipelines because of the extra initiative individuals must take.
  - Lack of Security Expertise: Many developers lack the deep security
    knowledge required to identify vulnerabilities early on, given the fast
    pace of technological innovation and creative threat actors.
  - Limited Resources for Automation: Smaller organizations may struggle with
    the cost of automation tools.
  While DevSecOps adoption might face a few hurdles, building a culture with
  regular security and automation brings many advantages that outweigh them. To
  name a few:
  - Reduced Security Risks: By addressing security from the beginning,
    vulnerabilities are identified and resolved before they reach production.
    Organizations using DevSecOps practices experience a 50% reduction in
    security vulnerabilities compared to those that follow traditional
    development processes.
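Addressing security from the beginning can be as concrete as an automated gate that runs before code merges. The sketch below is a deliberately tiny stand-in for a real secret scanner (the patterns and sample diff are illustrative only): it flags likely hardcoded credentials so they are caught long before production.

```python
import re

# Toy "shift-left" check: flag likely hardcoded credentials in source before
# merge. Real pipelines use dedicated scanners; these patterns are examples.

SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|password|secret)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def scan(source, filename="<diff>"):
    """Return a finding per line that matches a secret-like pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append(
                    f"{filename}:{lineno}: possible secret: {line.strip()}"
                )
    return findings

sample_diff = 'db_password = "hunter2"\nprint("hello")\n'
for finding in scan(sample_diff):
    print(finding)
```

Wired into CI as a failing check, even a check this small changes the default: a leaked credential becomes a blocked merge instead of a production incident.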
Talent in the new normal: How to manage fast-changing tech roles

The new workplace is one where automation and AI will be front and center.
  This has caught the imagination of today’s CIOs looking to move faster and
  scale. There’s no part of the business that can’t be automated. But how can
  the CIO build the culture, skills, and mindset to align with this new era of
  work, while also fostering growth? It will require CIOs to think differently.
  What might have worked five years ago will not cut it today. A good culture is
  key to an organization running effectively. This is why many of the biggest
  tech companies invest so heavily in making their offices a nice place to be.
  Culture is one of the intangible factors that make or break a professional’s
  happiness – and, by extension, their ability to work well. The CIO’s role in
  managing the organization’s growth is critical. CIOs understand how teams
  operate and, as a result, are well-placed to support their organization’s
  hiring and onboarding processes. Here, it’s not just about finding talent with
  the right skills, but also ensuring they meet the cultural needs of the
  organization. At a time when skills shortages are still a major challenge,
  what digital leaders should be looking for are candidates with an open mind
  and a desire to learn and grow. 
Quote for the day:
"Small daily imporevement over time
    lead to stunning results." -- Robin Sherman