
Daily Tech Digest - November 04, 2025


Quote for the day:

"Listen with curiosity, speak with honesty act with integrity." -- Roy T Bennett



What does aligning security to the business really mean?

“Alignment to me means that information security supports the strategy of the organization,” says Sattler, who also serves as a board director with the governance association ISACA. ... “It’s not enough to say it; you actually have to do it,” she explains. “There is a contingent of cybersecurity that sees itself as an island, implementing defense in depth in every corner of the organization, adopting all these frameworks and standards, but there are diminishing returns in doing that. So instead of saying, ‘This is our cybersecurity discipline and we’re doing all these things because the benchmarks tell us to,’ CISOs have to align their efforts to their organization’s business model.” ... To align, she says, security leaders must “know the objectives the business has and use those to shape strategy, whether it’s cost containment, going into new markets, adopting cloud. The playbook starts from understanding the organizational priorities and then layering in what threat actors are doing in that industry and what could go wrong, what is the risk we can live with, and understanding and articulating the business impact of security incidents.” ... “When security is not aligned, security is reacting to changes rather than shaping changes,” says Matt Gorham. “But when security isn’t chasing the business, it’s because it’s at the table from the beginning and is saying, ‘Here’s how I can help the business grow and grow securely.’”


CISO Burnout – Epidemic, Endemic, or Simply Inevitable?

“Burnout and PTSD are different conditions, though they can coexist and share some symptoms,” says Ventura. “The constant hypervigilance required in our roles can mirror PTSD symptoms, and some cyber security professionals do experience what could be considered secondary trauma from constantly dealing with the aftermath of cyber-attacks.” Experiencing trauma can make you more susceptible to burnout, and burnout can exacerbate existing trauma responses. “Both conditions are serious and treatable, but they require different approaches,” she suggests. And both are further complicated by neurodivergence, a characteristic that is particularly prevalent in cybersecurity, and especially among CISOs. ... “From my experience working with senior cyber security leaders,” she continues, “burnout also affects their ability to lead their teams effectively. They become less empathetic, more prone to micromanaging, and, ironically, more likely to create the very conditions that lead to burnout in their staff. The strategic thinking that makes a great CISO (the ability to see the big picture, anticipate threats, and balance risk with business needs) gets clouded by exhaustion and cynicism. Perhaps most dangerously, burned-out CISOs often develop tunnel vision, focusing obsessively on certain threats while missing others entirely. When the person responsible for an organization’s entire security posture is running on empty, everyone is at risk.”


Uncovering the risks of unmanaged identities

Unmanaged AI agents often operate independently, making it difficult to track and monitor their activities without a centralized management system. These agents can adapt and change their behavior autonomously, which complicates efforts to predict and control their actions. While performing their duties, AI agents can even spin up other models and agents that have access to valuable data. ... Unmanaged identities significantly expand the attack surface, providing more entry points for attackers. They are prime targets for credential theft, which can lead to lateral movement within an organization’s network. Forgotten or over-permissioned accounts can facilitate privilege escalation, allowing attackers to gain unauthorized access to sensitive data. Real-world breaches have been linked to unmanaged identities, underscoring the critical need for effective identity management. ... Inefficient access management due to unmanaged identities increases IT overhead and complexity. Unauthorized access or accidental deletions can disrupt business operations, leading to breaches, financial losses, and diminished customer trust. ... Unmanaged identities present a clear and present danger to organizations. They increase the risk of security breaches, compliance failures, and operational disruptions. It is imperative for organizations to prioritize identity discovery and management as a core security practice.


Empowering Teams: Decentralizing Architectural Decision-Making

Decisions form the core of software architecture, and practicing software architecture means working with decisions. Software development itself represents a constant stream of decisions. In a decentralized decision-making process, everyone contributes to architectural decisions, from developers to architects. For this approach, identifying whether a decision is architecturally significant and will impact the system now or in the future matters more than who made the decision or how long it took. Recording architectural decisions captures the why behind every what, creating valuable context for future learning and shared understanding. ... Timing for seeking feedback or advice depends on the nature of the decision. For impactful decisions affecting multiple system parts, or when lacking business or technical knowledge, seeking advice during the decision-making process yields better results. ADRs are immutable documents; once marked as adopted, they cannot be changed. If a decision needs revision, the previous ADR is superseded and a new one created. ... From the program leadership perspective, watching teams make independent decisions felt like being the first test driver in a Tesla using autopilot and hoping to avoid crashing. Staying out of decisions required conscious effort to avoid undermining the advice process and resorting to making the decisions for the team.


The Fractured Cloud: How CIOs Can Navigate Geopolitical and Regulatory Complexity

Initially, cloud environments were largely interchangeable from a governance, compliance, and security perspective. It didn't really matter exactly which cloud data center hosted an organization's workloads, or which jurisdiction the data center was located in. IT leaders had the luxury of choosing cloud platforms and regions based primarily on factors such as pricing and latency, without having to consider geopolitics or the global regulatory environment. Fast forward to the present, however, and planning a cloud architecture -- let alone evolving an existing cloud strategy in response to changing needs -- has become much more complex. ... During the past decade or so, a host of regulations have emerged that apply to specific jurisdictions, including the GDPR and the California Privacy Rights Act (CPRA). Regulations dealing with AI, which are just now coming online, are likely to add even more diversity as different states or countries introduce varying laws. ... A related issue is the increasing pressure organizations face surrounding data localization, which refers to the practice of keeping data within a certain country or jurisdiction. Regulations require this in some cases. Even if they don't, businesses may voluntarily choose to ensure data localization for the purposes of improving workload performance, or to assure customers that their data never leaves their home region.


Let's Get Physical: A New Convergence for Electrical Grid Security

Power plants and transmission/distribution system operators (TSOs and DSOs) have long focused on maintaining uptime and enhancing the resilience of their services; keeping the lights on is always the goal. That's especially true as the past few years have seen the rise of IT/OT convergence, wherein formerly siloed equipment that runs physical processes for critical infrastructure (operational technology, or OT) has been hooked up to the IT network and the Internet in some cases, exposing it to more cyberthreats. Now, another type of convergence has been forcing a new conversation. ... In this new world, both industry regulators and analysts, like those at Black & Veatch, are arguing the same point: that where once keeping the lights on might have just meant maintaining equipment and avoiding fallen trees, today's grid operators need a robust, integrated physical and cybersecurity strategy to maintain continuous service. ... an IT operation might primarily concern itself with firewalls, or network monitoring; but "in many cases, cyberattacks can often involve physical access to sites, whether by malicious insiders or unwitting employees and contractors. Understanding who is present on-site, when and why, is critical to investigating and mitigating attacks on operations," Bramson explains.


Was data mesh just a fad?

Data mesh architecture promised to solve these problems. A polar opposite approach from a data lake, a data mesh gives the source team ownership of the data and the responsibility to distribute the dataset. Other teams access the data from the source system directly, rather than from a centralized data lake. The data mesh was designed to be everything that the data lake system wasn’t. ... But the excitement around data mesh didn’t last. Many users became frustrated. Beneath the surface, almost every bottleneck between data providers and data consumers became an implementation challenge. The thing is, the data mesh approach isn’t a once-and-done change, but a long-term commitment to prepare a data schema in a certain way. Although every source team owns their dataset, they must maintain a schema that allows downstream systems to read the data, rather than replicating it. ... No, data mesh is not a fad, nor is it the next big thing that will solve all of your data challenges. But data mesh can dramatically reduce data management overhead, and at the same time improve data quality, for many companies. In essence, data mesh is a shift in mindset, one that completely changes the way you view data. Teams must envision data as a product, with source teams committed to owning their datasets over the long term and duplication actively discouraged.
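To make that schema commitment concrete, here is a minimal Python sketch (all names and fields are invented for illustration, not from the article): the source team publishes a data product with the schema it agrees to maintain, and downstream consumers validate records against that contract instead of copying the data into a central lake.

```python
# Minimal sketch of a source-owned data product contract (hypothetical names):
# the owning team declares the schema it commits to maintaining, and downstream
# consumers validate records against it rather than replicating the data.
from dataclasses import dataclass, field


@dataclass
class DataProduct:
    name: str
    owner: str                                   # the domain team accountable for the data
    schema: dict = field(default_factory=dict)   # column name -> expected type

    def validate(self, record: dict) -> bool:
        """Check that a record matches the schema the owning team has committed to."""
        return all(
            column in record and isinstance(record[column], expected_type)
            for column, expected_type in self.schema.items()
        )


# The checkout team owns its orders data and publishes the contract.
orders = DataProduct(
    name="orders",
    owner="checkout-team",
    schema={"order_id": str, "amount_cents": int, "currency": str},
)

# A downstream analytics team reads from the source directly and validates.
record = {"order_id": "A-1001", "amount_cents": 4599, "currency": "EUR"}
assert orders.validate(record), "record violates the published schema"
print(f"{orders.name} (owned by {orders.owner}) accepted the record")
```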


8 ways to make responsible AI part of your company's DNA

"Responsible AI is a team sport," the report's authors explain. "Clear roles and tight hand-offs are now essential to scale safely and confidently as AI adoption accelerates." To leverage the advantages of responsible AI, PwC recommends rolling out AI applications within an operating structure with three "lines of defense." First line: Builds and operates responsibly. Second line: Reviews and governs. Third line: Assures and audits. ... "For tech leaders and managers, making sure AI is responsible starts with how it's built," Rohan Sen, principal for cyber, data, and tech risk with PwC US. "To build trust and scale AI safely, focus on embedding responsible AI into every stage of the AI development lifecycle, and involve key functions like cyber, data governance, privacy, and regulatory compliance," said Sen. ... "Start with a value statement around ethical use," said Logan. "From here, prioritize periodic audits and consider a steering committee that spans privacy, security, legal, IT, and procurement. Ongoing transparency and open communication are paramount so users know what's approved, what's pending, and what's prohibited. Additionally, investing in training can help reinforce compliance and ethical usage." ... Make it a priority to "continually discuss how to responsibly use AI to increase value for clients while ensuring that both data security and IP concerns are addressed," said Tony Morgan, senior engineer at Priority Designs.


Context Engineering: The Next Frontier in AI-Driven DevOps

Context Engineering represents a significant evolution from the early days of prompt engineering, which focused on crafting the perfect, isolated instruction for an AI model. Context engineering, in contrast, is about orchestrating the entire information ecosystem around the AI. It’s the difference between giving someone a map (prompt engineering) and providing them with a real-time GPS that has traffic updates, road closures, and understands your personal driving preferences. ... The core components of context engineering in a DevOps environment include: Dynamic Information Assembly: Aggregating data from a multitude of DevOps tools, including monitoring platforms, CI/CD pipelines, and infrastructure as code (IaC) repositories. Multi-Source Integration: Connecting to APIs, databases, and internal documentation to create a comprehensive view of the entire system. Temporal Awareness: Understanding the history of changes, incidents, and performance to identify patterns and predict future outcomes. ... In a traditional setup, the CI/CD pipeline would run a standard set of tests. But with context engineering, a context-aware AI agent analyzes the change. It recognizes the high-risk nature of the code, cross-references it with a recent security audit that flagged a related library, and automatically triggers an extended security testing suite. It also notifies the security team for a priority review. This is a far cry from the old days of one-size-fits-all pipelines.
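As a rough illustration of that scenario, the sketch below (sources, thresholds, and function names are hypothetical, not from the article) shows a context-aware pipeline step assembling signals from several tools and deciding whether to trigger the extended security suite.

```python
# Illustrative sketch of a context-aware pipeline decision. The data sources and
# thresholds are stand-ins, not a specific product's API.
def assemble_context(change: dict) -> dict:
    """Dynamic information assembly: merge signals from multiple DevOps sources."""
    return {
        "touches_auth_code": any(p.startswith("auth/") for p in change["paths"]),
        "open_audit_findings": fetch_audit_findings(change["paths"]),   # security audit tool
        "recent_incidents": fetch_incident_count(days=30),              # monitoring platform
    }

def fetch_audit_findings(paths):
    # Stand-in for a call to a security-audit API.
    return 1 if any("jwt" in p for p in paths) else 0

def fetch_incident_count(days):
    # Stand-in for a call to the incident-management system.
    return 2

def plan_pipeline(context: dict) -> list[str]:
    """Multi-source and temporal awareness turns into a pipeline decision."""
    stages = ["unit-tests", "build"]
    if context["touches_auth_code"] or context["open_audit_findings"] > 0:
        stages += ["extended-security-suite", "notify-security-team"]
    return stages

change = {"paths": ["auth/jwt_validator.py", "README.md"]}
print(plan_pipeline(assemble_context(change)))
# ['unit-tests', 'build', 'extended-security-suite', 'notify-security-team']
```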


Drowning in Data? Here’s Why You Need to Ditch the Rowboat for an Aircraft Carrier

In an effort to stay afloat, many enterprises are trying to patch their systems with incremental upgrades. They add more cloud instances. They layer on external tools. They spin up new teams to manage increasingly fragmented stacks. But scaling up a fragile system doesn’t make it strong. It just makes the cracks bigger. ... The deeper issue is this: the dominant architecture most enterprises still rely on was designed over a decade ago. It served a world where workloads operated in gigabytes or single-digit terabytes. Today, companies are navigating hundreds of petabytes, yet many are still using infrastructure built for a far smaller scale. It’s no wonder the systems are buckling under the weight. ... As organizations reevaluate their data architectures, several priorities are coming into sharper focus: Reducing fragmentation by moving toward more unified environments, where systems work in concert rather than in silos. Improving performance and cost-efficiency not just through hardware, but through smarter architecture and workload optimization. Lowering latency for high-demand workloads like geospatial, AI, and real-time analytics, where speed directly impacts decision-making. Managing the energy consumption bottleneck in ways that align with both financial and sustainability goals. Ultimately, this shift is about enabling teams to go from playing defense (maintaining systems and containing cost) to playing offense with faster, more actionable insights.

Daily Tech Digest - June 15, 2025


Quote for the day:

“Whenever you find yourself on the side of the majority, it is time to pause and reflect.” -- Mark Twain



Gazing into the future of eye contact

Eye contact is a human need. But it also offers big business benefits. Brain scans show that eye contact activates parts of the brain linked to reading others’ feelings and intentions, including the fusiform gyrus, medial prefrontal cortex, and amygdala. These brain regions help people figure out what others are thinking or feeling, which we all need for trusting business and work relationships. ... If you look into the camera to simulate eye contact, you can’t see the other person’s face or reactions. This means both people always appear to be looking away, even if they are trying to pay attention. It is not just awkward — it changes how people feel and behave. ... The iContact Camera Pro is a 4K webcam that uses a retractable arm that places the camera right in your line of sight so that you can look at the person and the camera at the same time. It lets you adjust video and audio settings in real time. It’s compact and folds away when not in use. It’s also easy to set up with a USB-C connection and works with Zoom, Microsoft Teams, Google Meet, and other major platforms. ... Finally, there’s Casablanca AI, software that fixes your gaze in real time during video calls, so it looks like you’re making eye contact even when you’re not. It works by using AI and GAN technology to adjust both your eyes and head angle, keeping your facial expressions and gestures natural, according to the company.


New York passes a bill to prevent AI-fueled disasters

“The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” said Senator Gounardes. “The people that know [AI] the best say that these risks are incredibly likely […] That’s alarming.” The RAISE Act is now headed for New York Governor Kathy Hochul’s desk, where she could either sign the bill into law, send it back for amendments, or veto it altogether. If signed into law, New York’s AI safety bill would require the world’s largest AI labs to publish thorough safety and security reports on their frontier AI models. The bill also requires AI labs to report safety incidents, such as concerning AI model behavior or bad actors stealing an AI model, should they happen. If tech companies fail to live up to these standards, the RAISE Act empowers New York’s attorney general to bring civil penalties of up to $30 million. The RAISE Act aims to narrowly regulate the world’s largest companies — whether they’re based in California (like OpenAI and Google) or China (like DeepSeek and Alibaba). The bill’s transparency requirements apply to companies whose AI models were trained using more than $100 million in computing resources (seemingly, more than any AI model available today), and are being made available to New York residents.


The ZTNA Blind Spot: Why Unmanaged Devices Threaten Your Hybrid Workforce

The risks are well-documented and growing. But many of the traditional approaches to securing these endpoints fall short—adding complexity without truly mitigating the threat. It’s time to rethink how we extend Zero Trust to every user, regardless of who owns the device they use. ... The challenge of unmanaged endpoints is no longer theoretical. In the modern enterprise, consultants, contractors, and partners are integral to getting work done—and they often need immediate access to internal systems and sensitive data. BYOD scenarios are equally common. Executives check dashboards from personal tablets, marketers access cloud apps from home desktops, and employees work on personal laptops while traveling. In each case, IT has little to no visibility or control over the device’s security posture. ... To truly solve the BYOD and contractor problem, enterprises need a comprehensive ZTNA solution that applies to all users and all devices under a single policy framework. The foundation of this approach is simple: trust no one, verify everything, and enforce policies consistently. ... The shift to hybrid work is permanent. That means BYOD and third-party access are not exceptions—they’re standard operating procedures. It’s time for enterprises to stop treating unmanaged devices as an edge case and start securing them as part of a unified Zero Trust strategy.


3 reasons I'll never trust an SSD for long-term data storage

SSDs rely on NAND flash memory, which inevitably wears out after a finite number of write cycles. Every time you write data to an SSD and erase it, you use up one write cycle. Most manufacturers specify the write endurance for their SSDs, which is usually in terabytes written (TBW). ... When I first started using SSDs, I was under the impression that I could just leave them on the shelf for a few years and access all my data whenever I wanted. But unfortunately, that's not how NAND flash memory works. The data stored in each cell leaks over time; the electric charge used to represent a bit can degrade, and if you don't power on the drive periodically to refresh the NAND cells, those bits can become unreadable. This is called charge leakage, and it gets worse with SSDs using lower-end NAND flash memory. Most consumer SSDs these days use TLC and QLC NAND flash memory, which aren't as great as SLC and MLC SSDs at data retention. ... A sudden power loss during critical write operations can corrupt data blocks and make recovery impossible. That's because SSDs often utilize complex caching mechanisms and intricate wear-leveling algorithms to optimize performance. During an abrupt shutdown, these processes might fail to complete correctly, leaving your data corrupted.
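As a back-of-the-envelope illustration of the TBW rating mentioned above, the snippet below estimates drive lifespan from an assumed endurance figure and daily write volume (all numbers are hypothetical).

```python
# Rough endurance estimate based on a TBW rating. The capacity, TBW figure, and
# daily write volume below are illustrative assumptions, not from the article.
tbw_rating_tb = 600          # manufacturer-rated terabytes written
daily_writes_gb = 50         # average host writes per day
write_amplification = 1.5    # extra NAND writes from wear-leveling/GC (assumed)

effective_daily_tb = daily_writes_gb / 1000 * write_amplification
years_of_endurance = tbw_rating_tb / effective_daily_tb / 365
print(f"Estimated endurance: {years_of_endurance:.1f} years")  # ~21.9 years
```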


Beyond the Paycheck: Where IT Operations Careers Outshine Software Development

On the whole, working in IT tends to be more dynamic than working as a software developer. As a developer, you're likely to spend the bulk of your time writing code using a specific set of programming languages and frameworks. Your day-to-day, month-to-month, and year-to-year work will center on churning out never-ending streams of application updates. The tasks that fall to IT engineers, in contrast, tend to be more varied. You might troubleshoot a server failure one day and set up a RAID array the next. You might spend part of your day interfacing with end users, then go into strategic planning meetings with executives. ... IT engineers tend to be less abstracted from end users, with whom they often interact on a daily basis. In contrast, software engineers are more likely to spend their time writing code while rarely, if ever, watching someone use the software they produce. As a result, it can be easier in a certain respect for someone working in IT as compared to software development to feel a sense of satisfaction.  ... While software engineers can move into adjacent types of roles, like site reliability engineering, IT operations engineers arguably have a more diverse set of easily pursuable options if they want to move up and out of IT operations work.


Europe is caught in a cloud dilemma

The European Union is worried about its reliance on the leading US-based cloud providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). These large-scale players hold an unrivaled influence over the cloud sector and manage vital infrastructure essential for driving economies and fostering innovation. European policymakers have raised concerns that their heavy dependence exposes the continent to vulnerabilities, constraints, and geopolitical uncertainties. ... Europe currently lacks cloud service providers that can challenge those global Goliaths. Despite efforts like Gaia-X that aim to change this, it’s not clear if Europe can catch up anytime soon. It will be a prohibitively expensive undertaking to build large-scale cloud infrastructure in Europe that is both cost-efficient and competitive. In a nutshell, Europe’s hope to adopt top-notch cloud technology without the countries that currently dominate the industry is impractical, considering current market conditions. ... Often companies view cloud integration as merely a checklist or set of choices to finalize their cloud migration. This frequently results in tangled networks and isolated silos. Instead, businesses should overhaul their existing cloud environment with a comprehensive strategy that considers both immediate needs and future goals as well as the broader geopolitical landscape.


Applying Observability to Leadership to Understand and Explain your Way of Working

Leadership observability means observing yourself as you lead. Alex Schladebeck shared at OOP conference how narrating thoughts, using mind maps, asking questions, and identifying patterns helped her as a leader to explain decisions, check bias, support others, and understand her actions and challenges. Employees and other leaders around you want to understand what leads to your decisions, Schladebeck said. ... Heuristics give us our "gut feeling". And that’s useful, but it’s better if we’re able to take a step back and get explicit about how we got to that gut feeling, Schladebeck mentioned. If we categorise and label things and explain what experiences lead us to our gut feeling, then we have the option of checking our bias and assumptions, and can help others to develop the thinking structures to make their own decisions, she explained ... Schladebeck recommends that leaders narrate their thoughts to reflect on, and describe their own work to the ones they are leading. They can do this by asking themselves questions like, "Why do I think that?", "What assumptions am I basing this on?", "What context factors am I taking into account?" Look for patterns, categories, and specific activities, she advised, and then you can try to explain these things to others around you. To visualize her thinking as a leader, Schladebeck uses mind maps.


Data Mesh: The Solution to Financial Services' Data Management Nightmare

Data mesh is not a technology or architecture, but an organizational and operational paradigm designed to scale data in complex enterprises. It promotes domain-oriented data ownership, where teams manage their data as a product, using a self-service infrastructure and following federated governance principles. In a data mesh, any team or department within an organization becomes accountable for the quality, discoverability, and accessibility of the data products they own. The concept emerged around five years ago as a response to the bottlenecks and limitations created by centralized data engineering teams acting as data gatekeepers. ... In a data mesh model, data ownership and stewardship are assigned to the business domains that generate and use the data. This means that teams such as credit risk, compliance, underwriting, or investment analysis can take responsibility for designing and maintaining the data products that meet their specific needs. ... Data mesh encourages clear definitions of data products and ownership, which helps reduce the bottlenecks often caused by fragmented data ownership or overloaded central teams. When combined with modern data technologies — such as cloud-native platforms, data virtualization layers, and orchestration tools — data mesh can help organizations connect data across legacy mainframes, on-premises databases, and cloud systems.


Accelerating Developer Velocity With Effective Platform Teams

Many platform engineering initiatives fail, not because of poor technology choices, but because they miss the most critical component: genuine collaboration. The most powerful internal developer platforms aren’t just technology stacks; they’re relationship accelerators that fundamentally transform the way teams work together. Effective platform teams have a deep understanding of what a day in the life of a developer, security engineer or operations specialist looks like. They know the pressures these teams face, their performance metrics and the challenges that frustrate them most. ... The core mission of platform teams is to enable faster software delivery by eliminating complexity and cognitive load. Put simply: Make the right way the easiest way. Developer experience extends beyond function; it’s about creating delight and demonstrating that the platform team cares about the human experience, not just technical capabilities. The best platforms craft natural, intuitive interfaces that anticipate questions and incorporate error messages that guide, rather than confuse. Platform engineering excellence comes from making complex things appear simple. It’s not about building the most sophisticated system; it’s about reducing complexity so developers can focus on creating business value.


AI agents will be ambient, but not autonomous - what that means for us

Currently, the AI assistance that users receive is deterministic; that is, humans are expected to enter a command in order to receive an intended outcome. With ambient agents, there is a shift in how humans fundamentally interact with AI to get the desired outcomes they need; the AI assistants rely instead on environmental cues. "Ambient agents we define as agents that are triggered by events, run in the background, but they are not completely autonomous," said Chase. He explains that ambient agents benefit employees by allowing them to expand the magnitude of their work and scale themselves in ways they could not previously do. ... When talking about these types of ambient agents with advanced capabilities, it's easy to become concerned about trusting AI with your data and with executing actions of high importance. To tackle that concern, it is worth reiterating Chase's definition of ambient agents -- they're "not completely autonomous." ... "It's not deterministic," added Jokel. "It doesn't always give you the same outcome, and we can build scaffolding, but ultimately you still want a human being sitting at the keyboard checking to make sure that this decision is the right thing to do before it gets executed, and I think we'll be in that state for a relatively long period of time."
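A minimal sketch of that idea, assuming invented event names and a simple approval prompt (this is not any vendor's actual API): an agent is triggered by an event, runs in the background, and pauses for a human check before executing a high-importance action.

```python
# Event-triggered ("ambient") agent with a human in the loop. Event types,
# actions, and the approval step are illustrative assumptions.
def on_event(event: dict):
    """Triggered by an environmental cue rather than a typed command."""
    proposal = propose_action(event)
    if proposal["risk"] == "high" and not human_approves(proposal):
        return "skipped"                      # human declined; nothing executed
    return execute(proposal)

def propose_action(event):
    if event["type"] == "invoice_received":
        return {"action": f"pay invoice {event['id']}", "risk": "high"}
    return {"action": "file for later", "risk": "low"}

def human_approves(proposal) -> bool:
    answer = input(f"Approve '{proposal['action']}'? [y/N] ")
    return answer.strip().lower() == "y"

def execute(proposal):
    return f"executed: {proposal['action']}"

print(on_event({"type": "invoice_received", "id": "INV-42"}))
```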





Daily Tech Digest - April 21, 2025


Quote for the day:

"In simplest terms, a leader is one who knows where he wants to go, and gets up, and goes." -- John Erksine



Two ways AI hype is worsening the cybersecurity skills crisis

Another critical factor in the AI-skills shortage discussion is that attackers are also leveraging AI, putting defenders at an even greater disadvantage. Cybercriminals are using AI to generate more convincing phishing emails, automate reconnaissance, and develop malware that can evade detection. Meanwhile, security teams are struggling just to keep up. “AI exacerbates what’s already going on at an accelerated pace,” says Rona Spiegel, cyber risk advisor at GroScale and former cloud governance leader at Wells Fargo and Cisco. “In cybersecurity, the defenders have to be right all the time, while attackers only have to be right once. AI is increasing the probability of attackers getting it right more often.” ... “CISOs will have to be more tactical in their approach,” she explains. “There’s so much pressure for them to automate, automate, automate. I think it would be best if they could partner cross-functionally and focus on things like policy and urge the unification and simplification of how policies are adapted… and make sure how we’re educating the entire environment, the entire workforce, not just the cybersecurity.” Appayanna echoes this sentiment, arguing that when used correctly, AI can ease talent shortages rather than exacerbate them.


Data mesh vs. data fabric vs. data virtualization: There’s a difference

“Data mesh is a decentralized model for data, where domain experts like product engineers or LLM specialists control and manage their own data,” says Ahsan Farooqi, global head of data and analytics, Orion Innovation. While data mesh is tied to certain underlying technologies, it’s really a shift in thinking more than anything else. In an organization that has embraced data mesh architecture, domain-specific data is treated as a product owned by the teams relevant to those domains. ... As Matt Williams, field CTO at Cornelis Networks, puts it, “Data fabric is an architecture and set of data services that provides intelligent, real-time access to data — regardless of where it lives — across on-prem, cloud, hybrid, and edge environments. This is the architecture of choice for large data centers across multiple applications.” ... Data virtualization is the secret sauce that can make that happen. “Data virtualization is a technology layer that allows you to create a unified view of data across multiple systems and allows the user to access, query, and analyze data without physically moving or copying it,” says Williams. That means you don’t have to worry about reconciling different data stores or working with data that’s outdated. Data fabric uses data virtualization to produce that single pane of glass: It allows the user to see data as a unified set, even if that’s not the underlying physical reality.
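A toy sketch of the data virtualization idea described by Williams (the sources and fields are invented): a query runs against a unified logical view that pulls from each underlying system at query time, without physically consolidating the data.

```python
# Two "systems" exposed as generators; the unified view joins them on demand,
# with no copy into a central store. All names are hypothetical.
def crm_source():
    yield from [{"customer": "acme", "region": "EU"},
                {"customer": "globex", "region": "US"}]

def billing_source():
    yield from [{"customer": "acme", "mrr": 1200},
                {"customer": "globex", "mrr": 800}]

def unified_view():
    """Join the two systems at query time; no physical consolidation."""
    mrr_by_customer = {row["customer"]: row["mrr"] for row in billing_source()}
    for row in crm_source():
        yield {**row, "mrr": mrr_by_customer.get(row["customer"])}

# The user queries one logical dataset, unaware of where each field lives.
eu_revenue = sum(r["mrr"] for r in unified_view() if r["region"] == "EU")
print(eu_revenue)  # 1200
```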


Biometrics adoption strategies benefit when government direction is clear

Part of the problem seems to be the collision of private and public sector interests in digital ID use cases like right-to-work checks. They would fall outside the original conception of Gov.uk as a system exclusively for public sector interaction, but the business benefit they provide is strictly one of compliance. The UK government’s Office for Digital Identities and Attributes (OfDIA), meanwhile, brought the register of digital identity and attribute services to the public beta stage earlier this month. The register lists services certified to the digital identity and attributes trust framework to perform such compliance checks, and the recent addition of Gov.uk One Login provided the spark for the current industry conflagration. Age checks for access to online pornography in France now require a “double-blind” architecture to protect user privacy. The additional complexity still leaves clear roles, however, which VerifyMy and IDxLAB have partnered to fill. Yoti has signed up a French pay site, but at least one big international player would rather fight the age assurance rules in court. Aviation and border management is one area where the enforcement of regulations has benefited from private sector innovation. Preparation for Digital Travel Credentials is underway with Amadeus pitching its “journey pass” as a way to use biometrics at each touchpoint as part of a reimagined traveller experience. 



Will AI replace software engineers? It depends on who you ask

Effective software development requires "deep collaboration with other stakeholders, including researchers, designers, and product managers, who are all giving input, often in real time," said Callery-Colyne. "Dialogues around nuanced product and user information will occur, and that context must be infused into creating better code, which is something AI simply cannot do." The area where AIs and agents have been successful so far, "is that they don't work with customers directly, but instead assist the most expensive part of any IT, the programmers and software engineers," Thurai pointed out. "While the accuracy has improved over the years, Gen AI is still not 100% accurate. But based on my conversations with many enterprise developers, the technology cuts down coding time tremendously. This is especially true for junior to mid-senior level developers." AI software agents may be most helpful "when developers are racing against time during a major incident, to roll out a fixed code quickly, and have the systems back up and running," Thurai added. "But if the code is deployed in production as is, then it adds to tech debt and could eventually make the situation worse over the years, many incidents later."


Protected NHIs: Key to Cyber Resilience

We live in a world where cyber threats are continually evolving. Cyber attackers are getting smarter and more sophisticated with their techniques. Traditional security measures no longer suffice. NHIs can be the critical game-changer that organizations have been looking for. So, why is this the case? Well, cyber attackers, in the current times, are not just targeting humans but machines as well. Remember that your IT environment includes computing resources like servers, applications, and services that all represent potential points of attack. Non-Human Identities have bridged the gap between human identities and machine identities, providing an added layer of protection. NHI security is of utmost importance as these identities can have overarching permissions. One single mishap with an NHI can lead to severe consequences. ... Businesses are significantly relying on cloud-based services for a wide range of purposes, from storage solutions to sophisticated applications. That said, the increasing dependency on the cloud has elucidated the pressing need for more robust and sophisticated security protocols. An NHI management strategy substantially supports this quest for fortified cloud security. By integrating with your cloud services, NHIs ensure secured access, moderated control, and streamlined data exchanges, all of which are instrumental in the prevention of unauthorized accesses and data violations.


Job seekers using genAI to fake skills and credentials

“We’re seeing this a lot with our tech hires, and a lot of the sentence structure and overuse of buzzwords is making it super obvious,” said Joel Wolfe, president of HiredSupport, a California-based business process outsourcing (BPO) company. HiredSupport has more than 100 corporate clients globally, including companies in the eCommerce, SaaS, healthcare, and fintech sectors. Wolfe, who weighed in on the topic on LinkedIn, said he’s seeing AI-enhanced resumes “across all roles and positions, but most obvious in overembellished developer roles.” ... In general, employers say they don’t have a problem with applicants using genAI tools to write a resume, as long as it accurately represents a candidate’s qualifications and experience. ZipRecruiter, an online employment marketplace, said 67% of 800 employers surveyed reported they are open to candidates using genAI to help write their resumes, cover letters, and applications, according to its Q4 2024 Employer Report. Companies, however, face a growing threat from fake job seekers using AI to forge IDs, resumes, and interview responses. By 2028, a quarter of job candidates could be fake, according to Gartner Research. Once hired, impostors can then steal data or money, or install ransomware. ... Another downside to the growing flood of AI deep fake applicants is that it affects “real” job applicants’ chances of being hired.


How Will the Role of Chief AI Officer Evolve in 2025?

For now, the role is less about exploring the possibilities of AI and more about delivering on its immediate, concrete value. “This year, the role of the chief AI officer will shift from piloting AI initiatives to operationalizing AI at scale across the organization,” says Agarwal. And as for those potential upheavals down the road? CAIOs will no doubt have to be nimble, but Martell doesn’t see their fundamental responsibilities changing. “You still have to gather the data within your company to be able to use with that model and then you still have to evaluate whether or not that model that you built is delivering against your business goals. That has never changed,” says Martell. ... AI is at the inflection point between hype and strategic value. “I think there's going to be a ton of pressure to find the right use cases and deploy AI at scale to make sure that we're getting companies to value,” says Foss. CAIOs could feel that pressure keenly this year as boards and other executive leaders increasingly ask to see ROI on massive AI investments. “Companies who have set these roles up appropriately, and more importantly the underlying work correctly, will see the ROI measurements, and I don't think that chief AI officers [at those] organizations should feel any pressure,” says Mohindra.


Cybercriminals blend AI and social engineering to bypass detection

With improved attack strategies, bad actors have compressed the average time from initial access to full control of a domain environment to less than two hours. Similarly, while a couple of years ago it would take a few days for attackers to deploy ransomware, it’s now being detonated in under a day and even in as few as six hours. With such short timeframes between the attack and the exfiltration of data, companies are simply not prepared. Historically, attackers avoided breaching “sensitive” industries like healthcare, utilities, and critical infrastructures because of the direct impact to people’s lives.  ... Going forward, companies will have to reconcile the benefits of AI with its many risks. Implementing AI solutions expands a company’s attack surface and increases the risk of data getting leaked or stolen by attackers or third parties. Threat actors are using AI efficiently, to the point where any AI employee training you may have conducted is already outdated. AI has allowed attackers to bypass all the usual red flags you’re taught to look for, like grammatical errors, misspelled words, non-regional speech or writing, and a lack of context to your organization. Adversaries have refined their techniques, blending social engineering with AI and automation to evade detection. 


AI in Cybersecurity: Protecting Against Evolving Digital Threats

As much as AI bolsters cybersecurity defenses, it also enhances the tools available to attackers. AI-powered malware, for example, can adapt its behavior in real time to evade detection. Similarly, AI enables cybercriminals to craft phishing schemes that mimic legitimate communications with uncanny accuracy, increasing the likelihood of success. Another alarming trend is the use of AI to automate reconnaissance. Cybercriminals can scan networks and systems for vulnerabilities more efficiently than ever before, highlighting the necessity for cybersecurity teams to anticipate and counteract AI-enabled threats. ... The integration of AI into cybersecurity raises ethical questions that must be addressed. Privacy concerns are at the forefront, as AI systems often rely on extensive data collection. This creates potential risks for mishandling or misuse of sensitive information. Additionally, AI’s capabilities for surveillance can lead to overreach. Governments and corporations may deploy AI tools for monitoring activities under the guise of security, potentially infringing on individual rights. There is also the risk of malicious actors repurposing legitimate AI tools for nefarious purposes. Clear guidelines and robust governance are crucial to ensuring responsible AI deployment in cybersecurity.


AI workloads set to transform enterprise networks

As AI companies leapfrog each other in terms of capabilities, they will be able to handle even larger conversations — and agentic AI may increase the bandwidth requirements exponentially and in unpredictable ways. Any website or app could become an AI app, simply by adding an AI-powered chatbot to it, says F5’s MacVittie. When that happens, a well-defined, structured traffic pattern will suddenly start looking very different. “When you put the conversational interfaces in front, that changes how that flow actually happens,” she says. Another AI-related challenge that networking managers will need to address is that of multi-cloud complexity. ... AI brings in a whole host of potential security problems for enterprises. The technology is new and unproven, and attackers are quickly developing new techniques for attacking AI systems and their components. That’s on top of all the traditional attack vectors, says Rich Campagna, senior vice president of product management at Palo Alto Networks. “At the edge, devices and networks are often distributed, which leads to visibility blind spots,” he adds. That makes it harder to fix problems if something goes wrong. Palo Alto is developing its own AI applications, Campagna says, and has been for years. And so are its customers.


Daily Tech Digest - November 22, 2024

AI agents are coming to work — here’s what businesses need to know

Defining exactly what an agent is can be tricky, however: LLM-based agents are an emerging technology, and there’s a level of variance in the sophistication of tools labelled as “agents,” as well as how related terms are applied by vendors and media. And as with the first wave of generative AI (genAI) tools, there are question marks around how businesses will use the technology. ... With so many tools in development or coming to the market, there’s a certain amount of confusion among businesses that are struggling to keep pace. “The vendors are announcing all of these different agents, and you can imagine what it’s like for the buyers: instead of ‘The Russians are coming, the Russians are coming,’ it’s ‘the agents are coming, the agents are coming,’” said Loomis. “They’re being bombarded by all of these new offerings, all of this new terminology, and all of these promises of productivity.” Software vendors also offer varying interpretations of the term “agent” at this stage, and tools coming to market exhibit a broad spectrum of complexity and autonomy. ... Many of the agent builder tools coming to business and work apps require little or no expertise. This accessibility means a wide range of workers could manage and coordinate their own agents.


The limits of AI-based deepfake detection

In terms of inference-based detection, ground truth is never known and cannot be assumed, so detection is expressed as a one-to-ninety-nine percent likelihood that the content in question is or is not manipulated. An inference-based platform needs no buy-in from content platforms, but instead needs robust models trained on a wide variety of deepfaking techniques and technologies in various use cases and circumstances. To stay ahead of emerging threat vectors and groundbreaking new models, those making an inference-based solution can look to emerging gen AI research to implement such methods into detection models as or before such research becomes productized. ... Greater public awareness and education will always be of immense importance, especially in places where content is consumed that could potentially be deepfaked or artificially manipulated. Yet deepfakes are getting so convincing, so realistic that even storied researchers now have a hard time differentiating real from fake simply by looking at or listening to a media file. This is how advanced deepfakes have become, and they will only continue to grow in believability and realism. This is why it is crucial to implement deepfake detection solutions in the aforementioned content platforms or anywhere deepfakes can and do exist.


Quantum error correction research yields unexpected quantum gravity insights

So far, scientists have not found a general way of differentiating trivial and non-trivial AQEC codes. However, this blurry boundary motivated Liu, Daniel Gottesman of the University of Maryland, US; Jinmin Yi of Canada’s Perimeter Institute for Theoretical Physics; and Weicheng Ye at the University of British Columbia, Canada, to develop a framework for doing so. To this end, the team established a crucial parameter called subsystem variance. This parameter describes the fluctuation of subsystems of states within the code space, and, as the team discovered, links the effectiveness of AQEC codes to a property known as quantum circuit complexity. ... The researchers also discovered that their new AQEC theory carries implications beyond quantum computing. Notably, they found that the dividing line between trivial and non-trivial AQEC codes also arises as a universal “threshold” in other physical scenarios – suggesting that this boundary is not arbitrary but rooted in elementary laws of nature. One such scenario is the study of topological order in condensed matter physics. Topologically ordered systems are described by entanglement conditions and their associated code properties. 


Towards greener data centers: A map for tech leaders

The transformation towards sustainability can be complex, involving key decisions about data center infrastructure. Staying on-premises offers control over infrastructure and data but poses questions about energy sourcing. Shifting to hybrid or cloud models can leverage the innovations and efficiencies of hyperscalers, particularly regarding power management and green energy procurement. One of the most significant architectural advancements in this context is hyperconverged infrastructure (HCI). As we know, traditionally data centers operate using a three-tier architecture comprising separate servers, storage, and network equipment. This model, though reliable, has clear limitations in terms of energy consumption and cooling efficiency. By merging the server and storage layers, HCI reduces both the power demands and the associated cooling requirements. ... The drive to create more efficient and environmentally conscious data centers is not just about cost control; it’s also about meeting the expectations of regulators, customers, and stakeholders. As AI and other compute-intensive technologies continue to proliferate, organizations must reassess their infrastructure strategies, not just to meet sustainability goals but to remain competitive.


What is a data architect? Skills, salaries, and how to become a data framework master

The data architect and data engineer roles are closely related. In some ways, the data architect is an advanced data engineer. Data architects and data engineers work together to visualize and build the enterprise data management framework. The data architect is responsible for visualizing the blueprint of the complete framework that data engineers then build. ... Data architect is an evolving role and there’s no industry-standard certification or training program for data architects. Typically, data architects learn on the job as data engineers, data scientists, or solutions architects, and work their way to data architect with years of experience in data design, data management, and data storage work. ... Data architects must have the ability to design comprehensive data models that reflect complex business scenarios. They must be proficient in conceptual, logical, and physical model creation. This is the core skill of the data architect and the most requested skill in data architect job descriptions. This often includes SQL development and database administration. ... With regulations continuing to evolve, data architects must ensure their organization’s data management practices meet stringent legal and ethical standards. They need skills to create frameworks that maintain data quality, security, and privacy.


AI – Implementing the Right Technology for the Right Use Case

Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization. This is a theme that also emerged as cybersecurity automation matured – the need to identify the right use case for the technology, rather than try to apply it across the board. ... That said, AI is and will continue to be a useful tool. In today’s economic climate, as businesses adapt to a new normal of continuous change, AI—alongside automation—can be a scale function for cybersecurity teams, enabling them to pivot and scale to defend against evermore diverse attacks. In fact, our recent survey of 750 cybersecurity professionals found that 58% of organizations are already using AI in cybersecurity to some extent. However, we do anticipate that AI in cybersecurity will pass through the same adoption cycle and challenges experienced by “the cloud” and automation, including trust and technical deployment issues, before it becomes truly productive.


A GRC framework for securing generative AI

Understanding the three broad categories of AI applications is just the beginning. To effectively manage risk and governance, further classification is essential. By evaluating key characteristics such as the provider, hosting location, data flow, model type, and specificity, enterprises can build a more nuanced approach to securing AI interactions. A crucial factor in this deeper classification is the provider of the AI model. ... As AI technology advances, it brings both transformative opportunities and unprecedented risks. For enterprises, the challenge is no longer whether to adopt AI, but how to govern AI responsibly, balancing innovation against security, privacy, and regulatory compliance. By systematically categorizing generative AI applications—evaluating the provider, hosting environment, data flow, and industry specificity—organizations can build a tailored governance framework that strengthens their defenses against AI-related vulnerabilities. This structured approach enables enterprises to anticipate risks, enforce robust access controls, protect sensitive data, and maintain regulatory compliance across global jurisdictions. The future of enterprise AI is about more than just deploying the latest models; it’s about embedding AI governance deeply into the fabric of the organization.
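As a hedged sketch of that classification idea (the categories and scoring below are invented for illustration, not the article's framework), each generative AI application can be tagged by provider, hosting, and data flow, with the tags driving the level of governance scrutiny.

```python
# Illustrative classification of generative AI applications. Category values
# and the scoring rule are assumptions made for this sketch.
from dataclasses import dataclass

@dataclass
class AIApplication:
    name: str
    provider: str        # e.g. "third-party" or "in-house"
    hosting: str         # "saas", "private-cloud", "on-prem"
    data_flow: str       # "sends-sensitive-data" or "internal-only"
    model_type: str      # "general-purpose" or "domain-specific"

def governance_tier(app: AIApplication) -> str:
    """Count risk-raising characteristics to decide the review tier."""
    risky = [
        app.provider == "third-party",
        app.hosting == "saas",
        app.data_flow == "sends-sensitive-data",
    ]
    return "high-scrutiny" if sum(risky) >= 2 else "standard-review"

chatbot = AIApplication("support-chatbot", "third-party", "saas",
                        "sends-sensitive-data", "general-purpose")
print(governance_tier(chatbot))  # high-scrutiny
```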


Business Continuity Depends on the Intersection of Security and Resilience

The focus of security, or the goal of security, or the intended purpose of security in its most natural and traditional form, right before we start to apply it to other things, is to prevent bad things from happening, or protect the organization or protect assets. It doesn't necessarily have to be technology that does it. This is where your policies and procedures come into place. Letting users know what acceptable use policies are or what things are accepted when leveraging corporate resources. From a technology perspective, it's your firewalls, antivirus, intrusion detection systems and things of that nature. So, this is where we focus on good cyber hygiene. We're controlling the controllables and making sure that we're taking care of the things that are within our control. What about resilience? This one is near and dear to my heart. That's because I've been in tech and security for almost 25 years, and I've kind of gone through this evolution of what I think is important. We're trained as practitioners in this industry to believe that the goal is to reduce risk. We must reduce or mitigate cyber risk, or we can make other risk decisions. We can avoid it, we can accept it, or we can transfer it. But practically speaking, when we show up to work every day and we're doing something active, we're reducing risk.


How to stop data mesh turning into a data mess

Realistically, expecting employees to remember to follow data quality and compliance guidelines is neither fair nor enforceable. Adherence must be implemented without frustrating users, and become an integral part of the project delivery process. Unlikely as this sounds, a computational governance platform can impose the necessary standards as ‘guardrails’ while also accelerating the time to market of products. Sitting above an organisation’s existing range of data enablement and management tools, a computational governance platform ensures every project follows pre-determined policies, for quality, compliance, security, and architecture. Highly customisable standards can be set at global or local levels, whatever is required. ... While this might seem restrictive, there are many benefits from having a standardised way of working. To streamline processes, intelligent automated templates help data practitioners quickly initiate new projects and search for relevant data. The platform can oversee the deployment of data products by checking their compliance and taking care of the resource provisioning, freeing the teams from the burden of coping with infrastructure technicalities (on cloud or on-prem) and certifying data product compliance at the same time, before data products enter production. 
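A simplified sketch of such a guardrail check (policy names and fields are invented, not taken from any particular governance platform): a data product is evaluated against global policies before it is allowed into production.

```python
# Toy "computational governance" guardrails: global policies applied to every
# data product before deployment. All policy names and fields are hypothetical.
POLICIES = [
    ("has_owner",        lambda p: bool(p.get("owner"))),
    ("pii_is_tagged",    lambda p: not p.get("contains_pii") or p.get("pii_tagged")),
    ("schema_published", lambda p: "schema" in p),
]

def check_guardrails(data_product: dict) -> list[str]:
    """Return the list of policies the data product violates."""
    return [name for name, rule in POLICIES if not rule(data_product)]

candidate = {"name": "loan-applications", "owner": "credit-risk",
             "contains_pii": True, "pii_tagged": False}
violations = check_guardrails(candidate)
print(violations or "cleared for deployment")  # ['pii_is_tagged', 'schema_published']
```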


The SEC Fines Four SolarWinds Breach Victims

Companies should ensure the cyber and data security information they share within their organizations is consistent with what they share with government agencies, shareholders and the public, according to Buchanan Ingersoll & Rooney’s Sanger. This applies to their security posture prior to a breach, as well as their responses afterward. “Consistent messaging is difficult to manage given that dozens, hundreds or thousands could be responsible for an organization’s cybersecurity. Investigators will always be able to find a dissenting or more pessimistic outlook among the voices involved,” says Sanger. “If there is a credible argument that circumstances are or were worse than what the organization shares publicly, leadership should openly acknowledge it and take steps to justify the official perspective.” Corporate cybersecurity breach reporting is still relatively uncharted territory, however. “Even business leaders who intend to act with complete transparency can make inadvertent mistakes or communicate poorly, particularly because the language used to discuss cybersecurity is still developing and differs between communities,” says Sanger. “It’s noteworthy that the SEC framed each penalized company as having, ‘negligently minimized its cybersecurity incident in its public disclosures.’ 



Quote for the day:

"Perfection is not attainable, but if we chase perfection we can catch excellence." -- Vince Lombardi

Daily Tech Digest - November 14, 2024

Where IT Consultancies Expect to Focus in 2025

“Much of what’s driving conversations around AI today is not just the technology itself, but the need for businesses to rethink how they use data to unlock new opportunities,” says Chaplin. “AI is part of this equation, but data remains the foundation that everything else builds upon.” West Monroe also sees a shift toward platform-enabled environments where software, data, and platforms converge. “Rather than creating everything from scratch, companies are focusing on selecting, configuring, and integrating the right platforms to drive value. The key challenge now is helping clients leverage the platforms they already have and making sure they can get the most out of them,” says Chaplin. “As a result, IT teams need to develop cross-functional skills that blend software development, platform integration and data management. This convergence of skills is where we see impact -- helping clients navigate the complexities of platform integration and optimization in a fast-evolving landscape.” ... “This isn’t just about implementing new technologies, it’s about preparing the workforce and the organization to operate in a world where AI plays a significant role. ...”


How Is AI Shaping the Future of the Data Pipeline?

AI’s role in the data pipeline begins with automation, especially in handling and processing raw data – a traditionally labor-intensive task. AI can automate workflows and allow data pipelines to adapt to new data formats with minimal human intervention. With this in mind, Harrisburg University is actively exploring AI-driven tools for data integration that leverage LLMs and machine learning models to enhance and optimize ETL processes, including web scraping, data cleaning, augmentation, code generation, mapping, and error handling. These adaptive pipelines, which automatically adjust to new data structures, allow companies to manage large and evolving datasets without the need for extensive manual coding. ... Beyond immediate operational improvements, AI is shaping the future of scalable and sustainable data pipelines. As industries collect data at an accelerating rate, traditional pipelines often struggle to keep pace. AI’s ability to scale data handling across various formats and volumes makes it ideal for supporting industries with massive data needs, such as retail, logistics, and telecommunications. In logistics, for example, AI-driven pipelines streamline inventory management and optimize route planning based on real-time traffic data. 
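As an illustration of what such an adaptive pipeline might look like in miniature, the sketch below extends its source-to-target field mapping whenever a batch arrives with columns it has not seen before. The propose_mapping() hook is a stand-in for wherever an LLM or ML model would be consulted; none of the names refer to Harrisburg University's actual tooling.

```python
# Illustrative sketch of an "adaptive" ETL step: when a new batch arrives with
# unseen columns, the pipeline extends its mapping instead of failing.
# propose_mapping() is a placeholder for an LLM/ML-assisted suggestion (an
# assumption for this sketch, not a specific tool).

from typing import Dict, List

TARGET_SCHEMA = {"order_id": "id", "customer_email": "email", "amount": "total"}

def propose_mapping(new_field: str) -> str:
    # Placeholder heuristic; in practice a model could propose the target name.
    return new_field.lower().strip().replace(" ", "_")

def transform_batch(rows: List[Dict[str, str]],
                    mapping: Dict[str, str]) -> List[Dict[str, str]]:
    out = []
    for row in rows:
        # Extend the mapping on the fly for fields the pipeline has never seen.
        for source_field in row:
            if source_field not in mapping:
                mapping[source_field] = propose_mapping(source_field)
        out.append({mapping[src]: value for src, value in row.items()})
    return out

if __name__ == "__main__":
    batch = [{"order_id": "42", "customer_email": "a@b.com",
              "Discount Code": "SPRING"}]          # "Discount Code" is new
    mapping = dict(TARGET_SCHEMA)
    print(transform_batch(batch, mapping))
    print(mapping)  # the mapping now includes the newly discovered field
```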


Innovating with Data Mesh and Data Governance

Companies choose a data mesh to overcome the limitations of “centralized and monolithic” data platforms, as noted by Zhamak Dehghani, the director of emerging technologies at Thoughtworks. Technologies like data lakes and warehouses try to consolidate all data in one place, but enterprises can find that the data gets stuck there. A company might have only one centralized data repository – typically managed by a team such as IT – that serves the data up to everyone else in the company. This slows down data access because of bottlenecks. For example, having already taken days to get HR privacy approval, the finance department’s data access requests might then sit in the inbox of one or two people in IT for additional days. Instead, a data mesh puts data control in the hands of each domain that serves that data. Subject matter experts (SMEs) in the domain control how this data is organized, managed, and delivered. ... Data mesh with federated Data Governance balances expertise, flexibility, and speed with data product interoperability among different domains. With a data mesh, the people with the most knowledge about their subject matter take charge of their data. In the future, organizations will continue to face challenges in providing good, federated Data Governance to access data through a data mesh.
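One way to picture domain-owned data products that still interoperate is a shared contract that every domain implements. The sketch below uses a hypothetical “output port” interface; the class and method names are purely illustrative, not drawn from any specific data mesh implementation.

```python
# A minimal sketch of domain-owned data products behind a shared contract.
# DataProductPort and its methods are illustrative assumptions for this sketch.

from abc import ABC, abstractmethod
from typing import Dict, List

class DataProductPort(ABC):
    """Contract every domain implements so products stay interoperable."""

    @abstractmethod
    def describe(self) -> Dict[str, str]:
        """Metadata consumers can discover without asking the owning team."""

    @abstractmethod
    def read(self) -> List[Dict[str, object]]:
        """Serve the data the domain owns, however it is stored internally."""

class FinanceInvoices(DataProductPort):
    def describe(self) -> Dict[str, str]:
        return {"domain": "finance", "name": "invoices", "sla": "daily"}

    def read(self) -> List[Dict[str, object]]:
        return [{"invoice_id": 1, "amount": 120.0}]

class HREmployees(DataProductPort):
    def describe(self) -> Dict[str, str]:
        return {"domain": "hr", "name": "employees", "sla": "weekly"}

    def read(self) -> List[Dict[str, object]]:
        return [{"employee_id": 7, "department": "sales"}]

if __name__ == "__main__":
    catalog: List[DataProductPort] = [FinanceInvoices(), HREmployees()]
    for product in catalog:          # consumers discover and read uniformly
        print(product.describe(), len(product.read()), "rows")
```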


The Agile Manifesto was ahead of its time

A fundamental idea of the agile methodology is to alleviate this and allow for flexibility and changing requirements. The software development process should ebb and flow as features are developed and requirements change. The software should adapt quickly to these changes. That is the heart and soul of the whole Agile Manifesto. However, when the Agile Manifesto was conceived, the state of software development and software delivery technology was not flexible enough to fulfill what the manifesto was espousing. But this has changed with the advent of the SaaS (software as a service) model. It’s all well and good to want to maximize flexibility, but for many years, software had to be delivered all at once. Multiple features had to be coordinated to be ready for a single release date. Time had to be allocated for bug fixing. The limits of the technology forced software development teams to be disciplined, rigid, and inflexible. Delivery dates had to be met, after all. And once the software was delivered, changing it meant delivering all over again. Updates were often a cumbersome and arduous process. A Windows program of any complexity could be difficult to install and configure. Delivering or upgrading software at a site with 200 computers running Windows could be a major challenge.


Improving the Developer Experience by Deploying CI/CD in Databases

Characteristically less mature than CI/CD for application code, CI/CD for databases enables developers to manage schema updates such as changes to table structures and relationships. This management ability means developers can execute software updates to applications quickly and continuously without disrupting database users. It also helps improve quality and governance, creating a pipeline everyone follows. The CI stage typically involves developers working on code simultaneously, helping to fix bugs and address integration issues in the initial testing process. With the help of automation, businesses can move faster, with fewer dependencies and errors and greater accuracy — especially when backed up by automated testing and validation of database changes. Human intervention is not needed, resulting in fewer hours spent on change management. ... Deploying CI/CD for databases empowers developers to focus on what they do best: Building better applications. Businesses today should decide when, not if, they plan to implement these practices. For development leaders looking to start deploying CI/CD in databases, standardization — such as how certain things are named and organized — is a solid first step and can set the stage for automation in the future. 
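As a rough illustration of the kind of automated validation such a pipeline might run, the sketch below applies pending migration scripts to a throwaway in-memory database and enforces a simple naming convention before anything reaches a shared environment; the directory layout and naming rule are assumptions, not prescriptions from the article.

```python
# Sketch of a database CI check: apply pending migrations to a throwaway
# SQLite database and enforce a naming convention. Paths and the rule itself
# are illustrative assumptions.

import re
import sqlite3
from pathlib import Path

MIGRATIONS_DIR = Path("migrations")            # e.g. V001__create_orders.sql
NAMING_RULE = re.compile(r"^V\d{3}__[a-z0-9_]+\.sql$")

def validate_and_apply(migrations_dir: Path) -> None:
    conn = sqlite3.connect(":memory:")         # throwaway database for CI
    try:
        for script in sorted(migrations_dir.glob("*.sql")):
            if not NAMING_RULE.match(script.name):
                raise SystemExit(f"bad migration name: {script.name}")
            conn.executescript(script.read_text())   # fails fast on invalid SQL
        print("all migrations applied cleanly")
    finally:
        conn.close()

if __name__ == "__main__":
    validate_and_apply(MIGRATIONS_DIR)
```

Standardizing names and layout first, as the article suggests, is what makes a simple check like this enforceable by a machine rather than by convention alone.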


To Dare or not to Dare: the MVA Dilemma

Business stakeholders must understand the benefits of technology experiments in terms they are familiar with: how the technology will better satisfy customer needs. Operations stakeholders need to be satisfied that the technology is stable and supportable, or at least that stability and supportability are part of the criteria that will be used to evaluate the technology. Avoiding technology experiments entirely is usually a mistake, because it can mean missing opportunities to solve business problems in a better way, leaving solutions less effective than they could be. Over time, this can also increase technical debt. ... These trade-offs are constrained by two simple truths: the development team doesn’t have much time to acquire and master new technologies, and they cannot put the business goals of the release at risk by adopting unproven or unsustainable technology. This often leads the team to stick with tried-and-true technologies, but that strategy has risks of its own, most notably of the hammer-nail kind, in which old technologies unsuited to novel problems are used anyway, as when relational databases are used to store graph-like data structures.


2025 API Trend Reports: Avoid the Antipattern

Modern APIs aren’t all durable, full-featured products, and don’t need to be. If you’re taking multiple cross-functional agile sprints to design an API you’ll use for less than a year, you’re wasting resources building a system that will probably be overspecified and bloated. The alternative is to use tools and processes centered around an API developer’s unit of work, which is a single endpoint. No matter the scope or lifespan of an API, it will consist of endpoints, and each of those has to be written by a developer, one at a time. It’s another way that turning back to the fundamentals can help you adapt to new trends. ... Technology will keep evolving, and the way we employ AI might look quite different in a few years. Serverless architecture is the hot trend now, but something else will eventually overtake it. No doubt, cybercriminals will keep surprising us with new attacks. Trends evolve, but underlying fundamentals — like efficiency, the need for collaboration, the value of consistency and the need to adapt — will always be what drives business decisions. For the API industry, the key to keeping up with trends without sacrificing fundamentals is to take a developer-centric approach. Developers will always create the core value of your APIs. 
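To make “the endpoint as the unit of work” concrete, here is a single endpoint small enough to design, ship, and retire on its own; Flask and the route shown are purely illustrative choices, not anything the article prescribes.

```python
# One endpoint as a self-contained unit of work. Flask, the route, and the
# payload are illustrative assumptions for this sketch.

from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/orders/<int:order_id>")
def get_order(order_id: int):
    """A single endpoint: small enough to design, review, and retire on its own."""
    # A real handler would look the order up; a stub keeps the sketch runnable.
    return jsonify({"order_id": order_id, "status": "shipped"})

if __name__ == "__main__":
    app.run(port=8000)
```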


The targeted approach to cloud & data - CIOs' need for ROI gains

AI and DaaS are part of the pool of technologies that Pacetti also draws on, and the company also uses AI provided by Microsoft, both with ChatGPT and Copilot. Plus, AI has been integrated into the e-commerce site to support product research and recommendations. But there’s an even more essential area for Pacetti. “With the end of third-party cookies, AI is now essential to exploit the little data we can capture from users who accept tracking while browsing,” he says. “We use Google’s GA4 to compensate for missing analytics data, for example, by exploiting data from technical cookies.” ... CIOs discuss sales targets with CEOs and the board, cementing the IT and business bond. But an even more innovative step is not only making IT a driver of revenue, but also measuring IT with business indicators. This is a form of advanced convergence achieved by following specific methodologies. Sondrio People’s Bank (BPS), for example, adopted business relationship management, which deals with translating requests from operational functions to IT and, vice versa, bringing IT into operational functions. BPS also adopts proactive thinking, a risk-based framework for strategic alignment and compliance with business objectives.


Hidden Threats Lurk in Outdated Java

How important are security updates? After all, Java is now nearly 30 years old; haven’t we eliminated all the vulnerabilities by now? Sadly not, and realistically, that will never happen. OpenJDK contains 7.5 million lines of code and relies on many external libraries, all of which can be subject to undiscovered vulnerabilities. ... Since Oracle changed its distributions and licensing, there have been 22 updates. Of these, six PSUs required a modification and new release to address a regression that had been introduced. The time to create the new update has varied from just under two weeks to over five weeks. At no time have any of the CPUs been affected like this. Access to a CPU is essential to maintain the maximum level of security for your applications. Since all free binary distributions of OpenJDK only provide the PSU version, some users may consider a couple of weeks before being able to deploy as an acceptable risk. ... When an update to the JDK is released, all vulnerabilities addressed are disclosed in the release notes. Bad actors now have information enabling them to try and find ways to exploit unpatched applications.
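As a hedged example of keeping an eye on runtime currency, the sketch below parses `java -version` output and flags anything older than a patch baseline you choose; the baseline shown is arbitrary, and the regex only handles common "x.y.z" version strings, not every format Java has used.

```python
# Sketch of a fleet-hygiene check: compare the installed Java runtime against a
# minimum patch baseline. The baseline is an arbitrary example, and the regex
# assumes a modern 'x.y.z' version string.

import re
import subprocess

MINIMUM = (17, 0, 9)   # illustrative floor; pick your own patch baseline

def installed_java_version() -> tuple:
    # `java -version` prints to stderr, e.g. 'openjdk version "17.0.9" ...'
    out = subprocess.run(["java", "-version"], capture_output=True, text=True)
    match = re.search(r'version "(\d+)\.(\d+)\.(\d+)', out.stderr)
    if not match:
        raise RuntimeError("could not parse java -version output")
    return tuple(int(part) for part in match.groups())

if __name__ == "__main__":
    version = installed_java_version()
    status = "OK" if version >= MINIMUM else "OUTDATED - patch this runtime"
    print(f"installed {'.'.join(map(str, version))}: {status}")
```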


How to defend Microsoft networks from adversary-in-the-middle attacks

Depending on the impact of the attack, start the cleanup process. Start by forcing a password change on the user account, ensuring that you have revoked all tokens to block the attacker’s fake credentials. If the consequences of the attack were severe, consider disabling the user’s primary account and setting up a new temporary account as you investigate the extent of the intrusion. You may even consider quarantining the user’s devices and potentially taking forensic-level backups of workstations if you are unsure of the original source of the intrusion so you can best investigate. Next, review all app registrations, changes to service principals, enterprise apps, and anything else the user may have changed or impacted since the time the intrusion was noted. You’ll want to do a deep investigation into the mailbox’s access and permissions. Mandiant has a PowerShell-based script that can assist you in investigating the impact of the intrusion. “This repository contains a PowerShell module for detecting artifacts that may be indicators of UNC2452 and other threat actor activity,” Mandiant notes. “Some indicators are ‘high-fidelity’ indicators of compromise, while other artifacts are so-called ‘dual-use’ artifacts.”
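For the token-revocation step, a hedged sketch using the Microsoft Graph revokeSignInSessions action is shown below; acquiring an access token with the appropriate Graph permission is assumed and out of scope, and this is not the Mandiant tooling referenced above.

```python
# Sketch of the "revoke all tokens" step via Microsoft Graph's
# revokeSignInSessions action. Token acquisition and the required Graph
# permissions are assumed to be handled elsewhere; values below are placeholders.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def revoke_sessions(user_principal_name: str, access_token: str) -> bool:
    """Invalidate refresh tokens so stolen session material stops working alone."""
    resp = requests.post(
        f"{GRAPH}/users/{user_principal_name}/revokeSignInSessions",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.status_code in (200, 204)

if __name__ == "__main__":
    # Placeholder values; supply a real UPN and a token from your own auth flow.
    print(revoke_sessions("compromised.user@example.com", "<ACCESS_TOKEN>"))
```

Note that revoking sessions complements, rather than replaces, the password reset: both are needed to fully cut off an adversary-in-the-middle who has captured credentials and tokens.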



Quote for the day:

"To think creatively, we must be able to look afresh to at what we normally take for granted." -- George Kneller