
Daily Tech Digest - March 04, 2026


Quote for the day:

“The secret to success is good leadership, and good leadership is all about making the lives of your team members or workers better.” -- Tony Dungy


Composable infrastructure and build-to-fit IT: From standard stacks to policy-defined intent

Fixed stacks turn into friction. They are either too heavy for small workloads or too rigid for fast-changing ones. Teams start to fork the standard build “just this once,” and suddenly the exception becomes the default. That is how sprawl begins. Composable infrastructure is the most practical way I have found to break that cycle, but only if we stop defining “composable” as modular hardware. The differentiator is not the pool of compute, storage or fabric. The differentiator is the control plane: the policy, automation and governance that make composition safe, repeatable and reversible. ... The moment you move from “stacks” to “building blocks,” the control plane becomes the product you operate. At a minimum, I expect the control plane to do the following: translate intent into infrastructure using declarative definitions (infrastructure as code) and reusable compositions; enforce policy as code consistently across pipelines and runtime; prevent drift and continuously reconcile desired state; ... The “sprawl prevention” mechanisms that matter: every composed environment has a time-to-live by default, and if it is not renewed by policy, it is retired automatically; policies require standard tags (application, owner, cost center, data classification), and provisioning fails early if tags are missing; network exposure is deny-by-default, and public endpoints require explicit approval paths and documented intent.
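To make the tagging, time-to-live, and exposure policies above concrete, here is a minimal Python sketch of a provisioning gate; the tag names, the 30-day default TTL, and the `PolicyViolation` type are illustrative assumptions for this sketch, not any specific platform's API.

```python
from datetime import datetime, timedelta, timezone

# Illustrative policy values -- assumptions, not a specific product's defaults.
REQUIRED_TAGS = {"application", "owner", "cost_center", "data_classification"}
DEFAULT_TTL = timedelta(days=30)

class PolicyViolation(Exception):
    """Raised when a composition request fails policy checks."""

def validate_request(tags: dict, public_endpoint: bool, approval_ref: str | None) -> dict:
    # Fail early when mandatory tags are missing.
    missing = REQUIRED_TAGS - tags.keys()
    if missing:
        raise PolicyViolation(f"provisioning blocked, missing tags: {sorted(missing)}")
    # Deny-by-default network exposure: public endpoints need explicit approval.
    if public_endpoint and not approval_ref:
        raise PolicyViolation("public endpoint requires an approval reference")
    # Every composed environment gets a time-to-live unless renewed by policy.
    return {"tags": tags, "expires_at": datetime.now(timezone.utc) + DEFAULT_TTL}

# Usage: a compliant request passes; an untagged one would fail before provisioning.
env = validate_request(
    {"application": "billing", "owner": "team-a",
     "cost_center": "cc-123", "data_classification": "internal"},
    public_endpoint=False, approval_ref=None)
print(env["expires_at"])
```

The point of the design is that violations fail fast, before anything is provisioned, and every environment leaves the gate with an expiry that must be actively renewed.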


Why workforce identity is still a vulnerability, and what to do about it

Workforce identity is strongest at the moment of proofing. The risk isn’t usually malicious insiders slipping through onboarding. It’s what happens when verified identity is decoupled from account creation, daily access, and recovery. Manual handoffs are a common culprit. Identity is verified in one system, then an account is provisioned in another, often with human intervention in between. Temporary passwords are issued. Activation links are sent by email. Credentials are reset by help desk staff relying on judgment instead of evidence. ... If there is a single place where workforce identity collapses most consistently, it’s account recovery. Password resets, MFA re-enrollment, and help desk changes are designed to restore access quickly. In practice, they often bypass the very controls organizations rely on elsewhere. Knowledge-based questions, email verification, and voice-only confirmation remain common, even as attackers automate social engineering at scale. Help desk staff are placed in an impossible position. They are expected to verify identity without reliable evidence, under pressure to resolve issues quickly, using channels that are increasingly easy to spoof. ... Workforce identity assurance should begin with strong proofing, but it can’t stop there. Organizations need to deliberately preserve and periodically revalidate trust at key moments in the identity lifecycle, such as account creation, privilege changes, device enrollment, and recovery. 
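As a concrete illustration of revalidating trust at recovery, here is a minimal Python sketch of an assurance gate; the evidence types and their rankings are assumptions made for the example, not a formal standard's taxonomy.

```python
# Illustrative assurance ranking -- an assumption of this sketch, not a standard.
ASSURANCE = {
    "knowledge_based_questions": 0,  # easily socially engineered
    "email_link": 1,                 # inherits mailbox risk
    "voice_confirmation": 1,         # spoofable channel
    "enrolled_device_prompt": 2,     # ties recovery to a known device
    "document_plus_liveness": 3,     # re-runs identity proofing
}

MIN_RECOVERY_ASSURANCE = 2  # policy choice: recovery must not be weaker than login

def allow_recovery(evidence: list[str]) -> bool:
    """Permit a reset or MFA re-enrollment only with sufficiently strong evidence."""
    best = max((ASSURANCE.get(e, 0) for e in evidence), default=0)
    return best >= MIN_RECOVERY_ASSURANCE

print(allow_recovery(["knowledge_based_questions", "email_link"]))  # False
print(allow_recovery(["document_plus_liveness"]))                   # True
```

The design choice it illustrates: recovery evidence is ranked explicitly, so low-assurance channels such as knowledge-based questions can never satisfy the gate on their own.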


Microsoft: Hackers abuse OAuth error flows to spread malware

In the campaigns observed by Microsoft, the attackers create malicious OAuth applications in a tenant they control and configure them with a redirect URI pointing to their infrastructure. ... The researchers say that even if the URLs for Entra ID look like legitimate authorization requests, the endpoint is invoked with parameters requesting silent authentication (no interactive login) and an invalid scope that triggers an authentication error. This forces the identity provider to redirect users to the redirect URI configured by the attacker. In some cases, the victims are redirected to phishing pages powered by attacker-in-the-middle frameworks such as EvilProxy, which can intercept valid session cookies to bypass multi-factor authentication (MFA) protections. Microsoft found that the ‘state’ parameter was misused to auto-fill the victim’s email address in the credentials box on the phishing page, increasing the perceived legitimacy of the page. ... Microsoft suggests that organizations should tighten permissions for OAuth applications, enforce strong identity protections and Conditional Access policies, and use cross-domain detection across email, identity, and endpoints. The company highlights that the observed attacks are identity-based threats that abuse intended behavior in the OAuth framework: the standard specifies that authorization errors are returned through redirects, and the attackers weaponize exactly that mechanism.
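The abused parameters (`prompt`, `scope`, `state`, `redirect_uri`) are standard OAuth/OpenID Connect request parameters, so defenders can look for the combination described above. The Python sketch below shows one such heuristic check; the allow-list, the scope heuristic, and the example URL are illustrative assumptions, not Microsoft's detection logic.

```python
from urllib.parse import urlparse, parse_qs

# Illustrative allow-list of redirect hosts -- an assumption of this sketch.
TRUSTED_REDIRECT_HOSTS = {"app.example.com"}

def flag_suspicious_authorize_request(url: str) -> list[str]:
    """Heuristic checks for the abuse pattern described above."""
    q = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
    findings = []
    # Silent authentication without an interactive login.
    if q.get("prompt") == "none":
        findings.append("silent authentication requested (prompt=none)")
    # A scope with no recognizable values may be crafted to force an error.
    scopes = set(q.get("scope", "").split())
    if scopes and not scopes & {"openid", "profile", "email", "offline_access"}:
        findings.append("unrecognized scope: may be crafted to trigger an error")
    # Error redirects go wherever redirect_uri points.
    redirect_host = urlparse(q.get("redirect_uri", "")).hostname
    if redirect_host and redirect_host not in TRUSTED_REDIRECT_HOSTS:
        findings.append(f"redirect_uri points at untrusted host {redirect_host}")
    # The campaign misused 'state' to pre-fill the victim's email address.
    if "@" in q.get("state", ""):
        findings.append("state carries an email address (phishing pre-fill)")
    return findings

# Hypothetical request mirroring the described campaign shape.
url = ("https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
       "?client_id=x&prompt=none&scope=bogus.scope"
       "&redirect_uri=https://attacker.example/cb&state=victim@corp.com")
print(flag_suspicious_authorize_request(url))
```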


Designing infrastructure for AI that actually works

Running AI at scale has real consequences for the systems underneath. The hardware is different, the density is higher, the heat output is significant, and power consumption is a more critical consideration than ever before. This affects everything from rack layouts to grid demand. ... Many AI workloads perform better when they are run locally. Inference applications, like real-time fraud detection, conversational interfaces, and live monitoring, benefit from lower latency and greater data control. This is driving demand for edge computing data centres that can operate independently, handle dense processing loads, and integrate into wider enterprise systems without excessive complexity. ... no template can replace a clear understanding of the use case. The type of model, the data sources, and the required response time all shape what the critical digital infrastructure needs to deliver. Infrastructure leaders should be involved early in AI planning conversations. Their input can reduce rework, manage costs, and help the organisation avoid disruption from systems that fail under load. Sustainability is no longer optional: as AI drives up energy use, scrutiny will follow. Efficiency targets are constantly being tightened across Europe, with new benchmarks being introduced for both new and existing data centre facilities. Regulators want to see measurable improvement, not just strategy slides. ... The organisations that succeed with AI at scale are often the ones that treat infrastructure as a first-order concern.


Context Engineering is the Key to Unlocking AI Agents in DevOps

Context engineering represents an architectural shift from viewing prompts as static strings to treating context as a dynamic, managed resource. This discipline encompasses three core competencies that separate production-grade agents from experimental toys. ... Structured Memory Architectures implement the 12-Factor Agent principles: semantic memory for infrastructure facts, episodic memory for past incident patterns, and procedural memory for runbook execution. Rather than maintaining monolithic conversation histories, production agents externalize state to vector stores and structured databases, injecting only necessary context at each decision point. ... Organizations transitioning to context-engineered agents should begin with observability. Instrument existing agents to track context growth patterns, identifying which tool calls generate bloated outputs and which historical contexts prove irrelevant. This data drives selective context strategies. Next, implement external memory architectures. Vector databases like Pinecone or Weaviate store semantic infrastructure knowledge; graph databases maintain dependency relationships; time-series databases track operational history. Agents query these systems contextually rather than maintaining monolithic state. Finally, adopt MCP incrementally. Start with non-critical internal tools, exposing them through MCP servers to establish patterns for authentication, context isolation, and monitoring. 
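As a minimal illustration of selective context injection, the Python sketch below uses a toy in-memory index as a stand-in for a vector database such as Pinecone or Weaviate; the `embed` function and the memory entries are placeholders, not a real embedding model or store.

```python
import math

# Toy embedding -- a stand-in for a real embedding model, assumed for this sketch.
def embed(text: str) -> list[float]:
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch) / 1000.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Externalized semantic memory: facts live outside the conversation history.
MEMORY = [
    "payments-api depends on postgres-primary",
    "runbook: restart payments-api with kubectl rollout restart",
    "incident 2023-11: payments-api latency traced to connection pool",
]
INDEX = [(fact, embed(fact)) for fact in MEMORY]

def select_context(query: str, k: int = 2) -> list[str]:
    """Inject only the top-k relevant facts, not a monolithic history."""
    qv = embed(query)
    ranked = sorted(INDEX, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [fact for fact, _ in ranked[:k]]

print(select_context("payments-api is slow, what do we know?"))
```

The agent's prompt then receives only the facts relevant to the current decision point, which is the core of the selective context strategy the passage describes.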


LLMs can unmask pseudonymous users at scale with surprising accuracy

The findings have the potential to upend pseudonymity, an imperfect but often sufficient privacy measure used by many people to post queries and participate in sometimes sensitive public discussions while making it hard for others to positively identify the speakers. The ability to cheaply and quickly identify the people behind such obscured accounts opens them up to doxxing, stalking, and the assembly of detailed marketing profiles that track where speakers live, what they do for a living, and other personal information. That protection no longer reliably holds. ... Unlike those older pseudonymity-stripping methods, Lermen said, AI agents can browse the web and interact with it in many of the same ways humans do. They can use simulated reasoning to match candidate individuals against the evidence they gather. In one experiment, the researchers looked at responses to a questionnaire Anthropic conducted about how various people use AI in their daily lives. Using information drawn from the answers, the researchers were able to positively identify 7 percent of 125 participants. ... If LLMs’ success in deanonymizing people improves, the researchers warn, governments could use the techniques to unmask online critics, corporations could assemble customer profiles for “hyper-targeted advertising,” and attackers could build profiles of targets at scale to launch highly personalized social engineering scams.


What is digital employee experience — and why is it more important than ever?

Digital employee experience is a measure of how workers perceive and interact with the many digital tools and services they use in the workplace. It examines how employees feel about these technologies, including systems, software, and devices. Enterprises can deploy a DEX strategy that focuses on tracking, assessing, and improving employees’ technology experience, with the aim of increasing productivity and worker satisfaction. ... “DEX matters because the workplace is primarily digital for most employees, and friction creates compounding impact,” says Dan Wilson, vice president and research analyst, digital workplace, at research firm Gartner. Digital friction, not technology outages, has become the primary employee problem to manage, Wilson says. Brought on by fragmented technology deployments, inconsistent workflows, and other factors, “friction accumulates when employees can’t find information, miss updates, or work without context,” he says. ... “Most digital friction is invisible to IT because employees adapt instead of escalating,” Wilson says. “Friction accumulates across devices, apps, identity, workflows, and support, not in silos. These are not necessarily new issues, but the impact on the workforce increases as employees are increasingly dependent on technology to perform their work tasks.” ... While DEX tools can safely be used by non-IT teams, and some leading organizations do this, it’s not yet a common practice due to “limited IT maturity and collaboration” with the technology, Wilson says.


From 20 Lives an Hour to Zero: Can AI Power India’s Road Safety Reset?

India has made a clear and ambitious commitment. Under the Stockholm Declaration, the country aims to reduce road accident fatalities by 50% by 2030. But the numbers remind us how urgent this mission is. ... From a tech lens, the missing piece on the ground is continuous risk detection with immediate correction, at scale. Think of it like this: if the only time a driver feels the consequence of risk is at a checkpoint, behaviour changes briefly. When the “nudge” happens during the risky moment, exactly when speed crosses a certain threshold, or when the driver gets distracted, or when the following distance collapses, behaviour changes more consistently because the driver can self-correct in that exact moment. Hence, the conversation has been shifting from “recordings & post analysis” to “faster, real-time and in-cab alerts” and a coaching loop that is actually sustainable. ... Most serious incidents don’t come out of nowhere. They come from a few ordinary seconds where risk stacks up, like a closing gap, a brief glance away, or fatigue building near the end of a shift. If you only sample driving periodically, you miss those sequences. If you only rely on post-trip analytics, you learn what happened after the fact, when the driver no longer has a chance to correct that moment. That is why analysing 100% of driving time matters. It captures what led up to risk, how often it repeats, and under what conditions it shows up.
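Here is a minimal Python sketch of the "nudge in the risky moment" loop described above; the signal names and thresholds are invented for illustration and would need calibration in any real system.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    speed_kmh: float        # current speed
    limit_kmh: float        # posted limit for this road segment
    gap_seconds: float      # time gap to the vehicle ahead
    eyes_off_road_s: float  # continuous glance-away duration

# Illustrative thresholds -- assumptions, not a vendor's calibrated values.
def risk_events(s: Sample) -> list[str]:
    events = []
    if s.speed_kmh > s.limit_kmh * 1.1:
        events.append("speeding")
    if s.gap_seconds < 1.5:
        events.append("following distance collapsed")
    if s.eyes_off_road_s > 2.0:
        events.append("driver distracted")
    return events

def in_cab_alert(s: Sample) -> None:
    """Nudge in the risky moment rather than at a post-trip review."""
    events = risk_events(s)
    if len(events) >= 2:  # risk 'stacking up' across signals
        print(f"ALERT: {', '.join(events)}")

# Evaluated on every sample of a continuously monitored trip.
in_cab_alert(Sample(speed_kmh=92, limit_kmh=80, gap_seconds=1.1, eyes_off_road_s=0.4))
```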


Europe’s data center market booms: is it ready to take on the US?

If Europe wants technology to be a success for European companies, the capital must also come from Europe. The fact is that investors in America are generally able and willing to take significantly more risk than investors in Europe. Winterson is well aware of this, of course. He does believe that there are currently more “Europeans who want technology that helps Europeans become better at what they do.” ... Technological services are highly fragmented within Europe, and there is also a lack of a capital market of any substance. Finally, according to the report, there is no competitive energy market. These were and are issues that had to be resolved before more investment could come in. According to Winterson, the European Commission is now working quickly to resolve these issues. In his opinion, this is never fast enough, but the discussion surrounding sovereignty and dependence on technology from other parts of the world is certainly accelerating the process. ... It seems certain to us that data center capacity will increase significantly in the coming years. However, the question remains whether we in Europe can keep up with other parts of the world, particularly America. Winterson readily admits that investment from that corner into Europe will not decline very quickly. Based on the current distribution, we estimate that a rapid decline would not be desirable either: it would leave a considerable gap.


Epic Fury introduces new layer of enterprise risk

Enterprise emergency action groups should already be validating assumptions and aligning organizational plans as conditions evolve. Today, however, that work becomes mandatory. This is a posture-adjustment moment for all organizations that could be impacted by Operation Epic Fury and Iran’s response, not a wait-and-see moment. ... In post‑incident reviews, the pattern is consistent: Once tensions rise or conflict begins, civil aviation and maritime logistics become targeted, high‑impact levers for creating economic and political pressure. They are symbolic, visible, and deeply tied to global business operations. Any itinerary that transits the Gulf or relies on regional airspace or shipping lanes carries elevated risk. ... Iran’s cyber capability is not speculative; it is documented across years of joint advisories from CISA, FBI, NSA, and their international partners. Iranian state‑aligned actors routinely target poorly secured networks, internet‑connected devices, and critical infrastructure, often exploiting edge appliances, outdated software, and weak credentials. They have conducted disruptive operations against operational technology (OT) devices and have collaborated with ransomware affiliates to turn initial access into revenue or leverage. ... The practical point is simple: Iran’s cyber activity accelerates during periods of geopolitical tension, and enterprises with exposed services, unpatched infrastructure, or unmanaged edge devices become part of the accessible attack surface.

Daily Tech Digest - January 16, 2024

Why Pre-Skilling, Not Reskilling, Is The Secret To Better Employment Pipelines

In a landscape where the relevance of skills evolves, Zaslavski says that organizations should focus on selecting and advancing individuals based on their potential for learning skills like critical thinking and resiliency, instead of focusing on hard skills like coding. ... “By concentrating on these fundamental elements, as opposed to current technical proficiency or past work history, organizations position themselves with an agile and future-ready workforce. In this light, pre-skilling should be an integral part of employers’ talent strategy pre- and post-hiring, from sourcing and recruiting to career pathing and employee engagement.” ... She points to areas like understanding if a potential or existing employee has the EQ and social skills needed to perform as part of a group. Or whether they have the curiosity and analytical intelligence needed to learn new hard skills as well as the ambition and work ethic to achieve results. “When people have learning ability, drive, and people skills, they will probably develop new skills faster than others,” she says.


Agile is a concept we all continuously talk about, but what is it really?

Empiricism, teams, user stories, iterations: they are all examples of tools that we use in Agile, but they are not its purpose. Agile is about empowering people to take control of their environment and giving them complete freedom to discover how to use available tools in the most effective way. And this applies to the why too. People adopt Agile to increase efficiency, transparency, velocity, predictability, quality. But again, all these are results of Agile, not its goal. It is the mindset that makes it all possible. That is why it is “People and interactions above processes and tools”. To illustrate this, think about empiricism itself. Try introducing empiricism into an organisation mired in a culture of fear and control, and it doesn’t work, no matter what you do. You can’t force empiricism. People are too busy evading blame and manipulating information. Think about it: how often do people complain that the retrospective doesn’t deliver anything? Retrospectives where people just complain and nothing changes?


What Will It Take to Adopt Secure by Design Principles?

What does the future of secure by design adoption look like? CISA is continuing its work alongside industry partners. “Part of our strategy is to collect data on attacks and understand what that data is telling us about risk and impact and derive further best practices and work with companies, and really other nations, to adopt these principles,” Zabierek shares. International collaboration on secure by design is reflected not only in this CISA initiative but also the Guidelines for Secure AI System Development. CISA and the UK’s National Cyber Security Centre (NCSC) led the development of those guidelines, and 16 other countries have agreed to them. But like the Secure by Design initiative, this framework is also non-binding. A software manufacturer’s timeline for adopting secure by design principles will depend on its appetite, resources and the complexity of its products. But the more demand from government and consumers, the more likely adoption becomes. Right now, CISA has no plans to track adoption. “We're more focused on collaborating with industry so that we can understand best practices and recommend further better guidelines,” says Zabierek.


Mastering the art of motivation

Once you’ve helped employees connect their dots, the best way to further motivate them is also the cheapest and easiest, and has the fewest unintended consequences. Compliment them on a job well done, whenever they’ve done a job well enough to be worth noting. Sure, there are wrong ways to use compliments as motivators. First and foremost, the employee you’re complimenting must value your opinion. If they don’t, they’ll write off your compliment as just so much noise. Second, a compliment from you should not be an easy compliment to earn. “I really like your belt” isn’t going to inspire someone to work inventively and late. Third, with few exceptions, compliments should be public. There’s little reason for you to be embarrassed about being pleased with someone’s efforts. With one caveat: usually you’ll have one or two in your organization who routinely perform exceptionally well, but also one or two who are plodders — good enough and steady enough to keep around; not good enough or steady enough to earn your praise. Find a way to compliment them in public anyway — perhaps because you prize their reliability and lack of temperament.


Do you need GPUs for generative AI systems?

GPUs greatly enhance performance, but they do so at a significant cost. Also, for those of you tracking carbon points, GPUs consume notable amounts of electricity and generate considerable heat. Do the performance gains justify the cost? CPUs are the most common type of processor in computers. They are everywhere, including in whatever you’re using to read this article. CPUs can perform a wide variety of tasks, and they have a smaller number of cores compared to GPUs. However, they have sophisticated control units and can execute a wide range of instructions. This versatility means they can handle a broad range of AI workloads, including generative AI. CPUs can prototype new neural network architectures or test algorithms. They can be adequate for running smaller or less complex models. This is what many businesses are building right now (and will be for some time), and CPUs are sufficient for the use cases I’m currently hearing about. CPUs are also more cost-effective in terms of initial investment and power consumption for smaller organizations or individuals with limited resources.
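A small, hedged PyTorch sketch of this point: prefer a GPU when one is present, fall back to the CPU otherwise, and time a deliberately modest workload. The layer sizes and iteration count are arbitrary illustration values, and PyTorch itself is an assumption, not something the article prescribes.

```python
import time
import torch  # assumes PyTorch is installed

# Prefer a GPU when present, but fall back to CPU -- adequate for small models.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A deliberately small workload: matrix sizes typical of a modest model layer.
x = torch.randn(512, 512, device=device)
w = torch.randn(512, 512, device=device)

start = time.perf_counter()
for _ in range(100):
    x = torch.tanh(x @ w)
if device.type == "cuda":
    torch.cuda.synchronize()  # wait for queued GPU kernels before stopping the clock
print(f"{device}: {time.perf_counter() - start:.3f}s for 100 small layers")
```

At this scale the CPU finishes quickly enough for prototyping, which is the article's point: the GPU's advantage only pays for itself once models and batch sizes grow.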


How to create an AI team and train your other workers

Building a genAI team requires a holistic approach, according to Jayaprakash Nair, head of Machine Learning, AI and Visualization at Altimetrik, a digital engineering services provider. To reduce the risk of failure, organizations should begin by setting the foundation for quality data, establishing “a single source of truth strategy,” and defining business objectives. Building a team that includes diverse roles such as data scientists, machine learning engineers, data engineers, domain experts, project managers, and ethicists/legal advisors is also critical, he said. “Each role will contribute unique expertise and perspectives, which is essential for effective and responsible implementation,” Nair said. “Management must work to foster collaboration among these roles, help align each function with business goals, and also incorporate ethical and legal guidance to ensure that projects adhere to industry guidelines and regulations.” ... It's also important to look for people who like learning new technology, have a good business sense, and understand how the technology can benefit the company.


Data is the missing piece of the AI puzzle. Here's how to fill the gap

Companies looking to make progress in AI, says Labovich, must "strike a balance and acknowledge the significant role of unstructured data in the advancement of gen AI." Sharma agrees with these sentiments: "It is not necessarily true that organizations must use gen AI on top of structured data to solve highly complex problems. Oftentimes the simplest applications can lead to the greatest savings in terms of efficiency." The wide variety of data that AI requires can be a vexing piece of the puzzle. For example, data at the edge is becoming a major source for large language models and repositories. "There will be significant growth of data at the edge as AI continues to evolve and organizations continue to innovate around their digital transformation to grow revenue and profits," says Bruce Kornfeld, chief marketing and product officer at StorMagic. Currently, he continues, "there is too much data in too many different formats, which is causing an influx of internal strife as companies struggle to determine what is business-critical versus what can be archived or removed from their data sets."


3 ways to combat rising OAuth SaaS attacks

At their core, OAuth integrations are cloud apps that can access data on behalf of a user, with a defined permission set. When a Microsoft 365 user installs a MailMerge app in Word, for example, they have essentially created a service principal for the app and granted it an extensive permission set: read/write access, the ability to save and delete files, and the ability to access multiple documents to facilitate the mail merge. Organizations need to implement an application control process for OAuth apps and determine whether an application, like the one in the example above, is approved or not. ... Security teams should view user security through two separate lenses. The first is the way users access the applications. Apps should be configured to require multi-factor authentication (MFA) and single sign-on (SSO). ... Automated tools should scan the logs and report whenever an OAuth-integrated application is acting suspiciously. For example, applications that display unusual access patterns or geographic anomalies should be regarded as suspicious.
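A minimal Python sketch of that log-scanning idea follows; the log record fields, the per-app country baseline, and the "high-risk action" rule are assumptions of the example, not any product's schema.

```python
from collections import defaultdict

# Hypothetical audit-log records -- the field names are assumptions of this sketch.
logs = [
    {"app": "MailMergePro", "country": "US", "action": "file.read"},
    {"app": "MailMergePro", "country": "US", "action": "file.read"},
    {"app": "MailMergePro", "country": "RU", "action": "file.delete"},
]

# Baseline of countries previously seen per OAuth app.
baseline = {"MailMergePro": {"US"}}

def scan(records, known_countries):
    seen_actions = defaultdict(set)
    alerts = []
    for rec in records:
        app, country = rec["app"], rec["country"]
        # Geographic anomaly: access from a country never seen for this app.
        if country not in known_countries.get(app, set()):
            alerts.append(f"{app}: access from unusual location {country}")
        seen_actions[app].add(rec["action"])
    # Unusual access pattern: an illustrative high-risk action.
    for app, actions in seen_actions.items():
        if "file.delete" in actions:
            alerts.append(f"{app}: destructive action observed")
    return alerts

for alert in scan(logs, baseline):
    print(alert)
```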


Cloud cost optimisation: Strategies for managing cloud expenses and maximising ROI

Instead of employing manual resources, automating cloud optimisation can deliver greater resource savings. The auto-scaling service offered by Amazon Web Services (AWS) is a shining example of how firms can streamline their cloud optimisation quickly, adjusting in response to the changing resource requirements of systems and servers. ... At the planning stage, firms need to justify the cloud budget and ensure that unexpected spending is kept to a minimum. The same approach has to be followed in the building, deployment, and control phases so that any unexpected rise in spending can be corrected promptly without throwing the entire financial control into disarray. All these steps will help organisations develop a culture of cost-conscious cloud adoption and help them perform optimally while keeping costs in check. ... Incorporating cloud cost optimisation tools is a strategic approach for organisations to streamline expenditure and enhance ROI.
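As an example of the auto-scaling approach mentioned above, here is a hedged boto3 sketch that registers a target-tracking policy holding average CPU near 50%; the group and policy names are placeholders, and the target value is an arbitrary policy choice for illustration, not a recommendation.

```python
import boto3  # assumes the boto3 SDK and AWS credentials are configured

autoscaling = boto3.client("autoscaling")

# Target tracking: AWS adds or removes instances to hold average CPU near 50%,
# so capacity follows demand instead of being manually resized.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",   # placeholder group name
    PolicyName="keep-cpu-near-50",         # placeholder policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```

Because the policy both scales out under load and scales in when demand drops, it addresses the cost side as well as the performance side of the optimisation the passage describes.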


Pull Requests and Tech Debt

The biggest disadvantage of pull requests is understanding the context of the change, whether technical or business: you see what has changed without necessarily understanding why the change occurred. Almost universally, engineers review pull requests in the browser and do their best to understand what’s happening, relying on their understanding of the tech stack, architecture, business domains, etc. While some have the background necessary to mentally grasp the overall impact of the change, for others it’s guesswork, assumptions, and leaps of faith, which only gets worse as the complexity and size of the pull request increase. [Recently a friend said he reviewed all pull requests in his IDE, greatly surprising me: it’s the first I’ve heard of such diligence. While noble, that thoroughness becomes a substantial time commitment unless reviewing is your primary responsibility. I do this only when absolutely necessary. Not sure how he pulls it off!] Other than those good Samaritans, mostly what you’re doing is static code analysis: within the change in front of you, what has changed, and does it make sense? You can look for similar changes, emerging patterns that might drive refactoring, best practices, or others doing similar work.



Quote for the day:

"All leadership takes place through the communication of ideas to the minds of others." -- Charles Cooley