
Daily Tech Digest - January 07, 2026


Quote for the day:

“If you're not prepared to be wrong, you'll never come up with anything original.” -- Ken Robinson



Strategy is dying from learning lag, not market change

At first, you might think this is about being more agile, more innovative, or more aggressive. However, those are reactions, not solutions. The real shift is deeper: strategy no longer scales when the underlying assumptions expire too quickly. The advantage erodes because the environment moves faster than the organization’s ability to sense, understand and adapt to it. ... Strategic failure today is less about being wrong and more about staying wrong for too long. ... One way, and perhaps the only one, out of uncertainty is to learn faster and closer to where the actual signals appear. Learning, to me, is the disciplined updating of beliefs when new evidence arrives. Every decision is a prediction about how things will work. When reality proves you wrong, learning is how you fix that prediction. In a stable environment, you can afford to learn slowly. However, in unstable ones, like today’s, slow learning becomes existential. ... Organizations don’t fall behind all at once. They fall behind step by step: first in what they notice, then in how they interpret it, then in how long it takes to decide what to do and finally in how slowly they act. ... Strategy stalls not because people refuse to change, but because they can’t agree on the story beneath the change. They chased precision in interpretation when the real advantage would have come from running small tests to find out faster which interpretation is correct.


The new tech job doesn't require a degree. It starts in a data center

The answer won't be found in Silicon Valley or Data Center Alley. It's closer to home. Veterans, trade workers, and high school graduates not headed to college don't come through traditional pipelines, but they bring the right aptitude and mindset to the data center. Veterans have discipline and process-driven thinking that fits naturally into our operations — and for many, these roles offer a transition into a stable career. Someone who kept an aircraft carrier running knows what it means to manage infrastructure that can't fail. Many arrive with experience in related systems and are comfortable with shift work and high stakes. ... Young adults without college plans are often overlooked, but some excel in hands-on settings and just need an opportunity to prove it. Once they learn about a data center career and where it can take them, it becomes a chance to build a middle-class lifestyle close to home. ... Hiring nontraditional candidates is only the first step. What keeps them is a promotion track that works. After four weeks of hands-on and self-guided onboarding, techs can pursue certifications in battery backup systems, tower clearance, generator safety, and more. When qualified, they show it in the field and move up. This kind of investment has a ripple effect. A paycheck can lead to a mortgage and financial stability. And as techs move up or out, someone else steps in — maybe through a local program that appeared once your jobs did.


Automated data poisoning proposed as a solution for AI theft threat

The technique, created by researchers from universities in China and Singapore, is to inject plausible but false data into what’s known as a knowledge graph (KG) created by an AI operator. A knowledge graph holds the proprietary data used by the LLM. Injecting poisoned or adulterated data into a data system for protection against theft isn’t new. What’s new in this tool – dubbed AURA (Active Utility Reduction via Adulteration) – is that authorized users have a secret key that filters out the fake data so the LLM’s answer to a query is usable. If the knowledge graph is stolen, however, it’s unusable by the attacker unless they know the key, because the adulterants will be retrieved as context, causing deterioration in the LLM’s reasoning and leading to factually incorrect responses. The researchers say AURA degrades the performance of unauthorized systems to an accuracy of just 5.3%, while maintaining 100% fidelity for authorized users, with “negligible overhead,” defined as a maximum query latency increase of under 14%. ... As the use of AI spreads, CSOs have to remember that artificial intelligence and everything needed to make it work also make it much harder to recover from bad data being put into a system, Steinberg noted. ... “For now, many AI systems are being protected in similar manners to the ways we protected non-AI systems. That doesn’t yield the same level of protection, because if something goes wrong, it’s much harder to know if something bad has happened, and it’s harder to get rid of the implications of an attack.”
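The paper’s construction isn’t reproduced in the excerpt, but the keyed-filtering idea can be sketched. In the toy Python below (the marker scheme, names and facts are assumptions, not AURA’s actual mechanism), adulterated facts carry an HMAC marker derived from a secret key: an authorized retriever recomputes the marker and drops matching entries, while anyone without the key cannot tell genuine facts from poison.

```python
import hashlib
import hmac
import os

SECRET_KEY = b"shared-only-with-authorized-retrievers"   # hypothetical key

def adulterant_marker(fact: str, key: bytes) -> str:
    """Keyed marker that only key-holders can recompute and recognize."""
    return hmac.new(key, fact.encode(), hashlib.sha256).hexdigest()

def genuine(fact):
    # Genuine facts carry a random marker, indistinguishable from a keyed one without the key.
    return {"fact": fact, "marker": os.urandom(32).hex()}

def poisoned(fact, key):
    # Adulterants carry a keyed marker so authorized retrieval can filter them out.
    return {"fact": fact, "marker": adulterant_marker(fact, key)}

knowledge_graph = [
    genuine("Widget-X max operating temperature is 85C"),
    poisoned("Widget-X max operating temperature is 40C", SECRET_KEY),
]

def retrieve(graph, key=None):
    """With the key, adulterants are dropped; without it, every entry looks alike."""
    out = []
    for entry in graph:
        if key and hmac.compare_digest(entry["marker"], adulterant_marker(entry["fact"], key)):
            continue  # recognized adulterant: skip for authorized users
        out.append(entry["fact"])
    return out

print(retrieve(knowledge_graph, key=SECRET_KEY))  # only the genuine fact
print(retrieve(knowledge_graph))                  # a thief retrieves both, polluting the LLM's context
```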


From Zero Trust to Cyber Resilience: Why Architecture Alone Will Not Protect Enterprises in 2026

The core challenge facing CISOs is not whether Zero Trust is implemented, but whether the organization can continue to operate when, inevitably, controls fail. Modern threat actors no longer focus exclusively on breaching defenses; they aim to disrupt operations, degrade trust, and extend business impact over time. In this context, architecture alone is insufficient. What enterprises require is cyber resilience: the ability to anticipate, withstand, recover from, and adapt to cyber disruption. ... Zero Trust answers the question “Who can access what?” Cyber resilience answers a more consequential one: “How quickly can the business recover when access controls are no longer the primary failure point?” ... Resilience engineering reframes cybersecurity as a property of complex socio-technical systems. In this model, failure is not an anomaly; it is an expected condition. The objective shifts from breach avoidance to disruption management. In practice, this means evolving from an assume breach mindset to an assume disruption operating model, one where systems, teams, and leadership are prepared to function under degraded conditions. ... To prepare for 2026, CISOs should: Treat cyber resilience as a continuous operating capability, not a project; Integrate cybersecurity with business continuity and crisis management; Train executives and board members through realistic disruption scenarios; and Invest in recovery validation, not just control deployment. 


Generative AI and the future of databases

The data is at the heart of your line of business application, but it is also changing all the time, and if you keep extracting the data into some other corpus it gets stale. You can view it as two approaches: replication or federation. Am I going to replicate out of the database to some other thing or am I going to federate into the database? ... engineers know how to write good SQL queries. Whether they know how to write a good English-language description of the SQL queries is a completely different matter, but let’s assume for a second we can or we can have AI do it for us. Then the AI can figure out which tool to call for the user request and then generate the parameters. There are some things to worry about in terms of security. How can you set the right secure parameters? What parameters are the LLM allowed to set versus not allowed to set? ... When you combine structured and unstructured data, the next step is that it’s not just about exact results but about the most relevant results. In this sense databases start to have some of the capabilities of search engines, which is about relevance and ranking, and what becomes important is almost like precision versus recall for information retrieval systems. But how do you make all of this happen? One key piece is vector indexing. ... AI search is a key attribute of an AI-native database. And the other key attribute is AI functions.
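As a rough illustration of the “relevance and ranking” point, the sketch below combines a structured filter with a brute-force cosine-similarity ranking over embeddings; it only stands in for what a real vector index inside an AI-native database would do far more efficiently, and the rows and embedding values are invented.

```python
import math

# Toy "rows" with a structured column and a pre-computed embedding of the text column.
# In a real system the embeddings come from a model and are served by a vector index.
rows = [
    {"id": 1, "price": 19.0, "embedding": [0.9, 0.1, 0.0]},
    {"id": 2, "price": 49.0, "embedding": [0.1, 0.8, 0.3]},
    {"id": 3, "price": 24.0, "embedding": [0.7, 0.2, 0.1]},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def search(query_embedding, max_price, k=2):
    """Combine a structured predicate (price) with semantic relevance (cosine similarity)."""
    candidates = [r for r in rows if r["price"] <= max_price]        # SQL-style filter
    ranked = sorted(candidates,
                    key=lambda r: cosine(query_embedding, r["embedding"]),
                    reverse=True)                                    # relevance ranking
    return [r["id"] for r in ranked[:k]]

print(search([0.8, 0.15, 0.05], max_price=30.0))   # -> [1, 3]
```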


Cyber Risk Trends for 2026: Building Resilience, Not Just Defenses

On the defensive side, AI can accelerate detection and response, but tooling without guardrails will create fresh exposures. Your questions as a board should be: Where have we embedded AI in critical workflows? How do we assure the provenance and integrity of the data those models touch? Are we red-teaming our AI-enabled processes, not just our perimeter? ... Second, third-party ecosystems present an attack surface. The risk isn’t abstract: it’s a payroll provider outage that stops salaries, a logistics partner breach that stalls distribution, or a SaaS compromise that leaks your crown jewels. ... Third is quantum computing. Some will say it’s too early; some will say it’s too late. The pragmatic position is this: crypto agility is a business requirement now. Inventory where and how you use cryptography—applications, devices, certificates, key management, data at rest and in transit. Prioritize crown-jewel systems and long-lived data that must remain confidential for years. ... Fourth is the risk posed by geopolitics. We live in a more unstable world, and digital risk doesn’t respect borders. Conflicts spill into cyberspace, data sovereignty rules tighten, and critical components can become chokepoints overnight. ... We won’t repel every attack in 2026. But we can decide to bend rather than break. Resilience comes of age when it stops being a slogan and becomes a practiced capability—where governance, operations, technology, and people move as one.


Will there be a technology policy epiphany in 2026?

The UK government still seems implacably opposed to bringing forward any cross-sector, comprehensive AI legislation. Its one-liner in the 2024 King’s Speech said the government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.” That seemed sparing at the time, and now seems extraordinarily overblown. ... Turning to crypto-asset regulation, 2026 will continue the journey from draft legislation published on 15 December last year through to 25 October 2027 (yes, that is meant to say 2027) for the current “go live” date. Already we have seen some definitional clarification and the arrival of new provisions related to market abuse, public offers and disclosures. ... A critical thread to all of this is cyber. The Cyber Security Bill receives its second reading in the Commons today, 6 January. I’m very much looking forward to the bill arriving in the Lords later in the spring and would welcome your thoughts on what’s in and what currently is not. If that wasn’t enough for week one of 2026, we have the committee stage of the Crime and Policing Bill in the Lords tomorrow, Wednesday 7 January. ... By contrast, there is much chat on digital ID. A consultation is said to be coming this month, with a draft bill in May’s speech. This has hardly been helped by the government last year hanging its digital ID coat all around illegal immigration - a more than unfortunate decision.


The Big Shift: Five Trends Show Why 2026 is About Getting to Value

The conversation shifts from “What can this AI do?” to “What problem does it solve, and how much value does it unlock?”—and the technology that wins won’t be the most sophisticated, but the one that directly accelerates revenue, reduces friction in customer-facing workflows, or demonstrably improves employee productivity within a 12-month payback window. Crawford says this is “getting back to brass tacks.” “Organizations will carefully define their business objectives, whether customer engagement, revenue growth, employee productivity, or whatever it needs to be, before selecting a technology,” he says. ... In 2026, if your digital transformation project can’t demonstrate meaningful return within twelve months, it competes for oxygen with projects that can, and many won’t survive that fight, Batista says. This compression of payback expectations reflects a fundamental shift in how CFOs and boards view technology investments. Initiatives based on regulatory or compliance requirements—things mandated by law, for example—still justify longer timelines, but discretionary projects face much stricter scrutiny, Batista says. ... When it comes to limiting factors in scaling successful AI deployments, Crawford says the top issue will be failures in AI governance. “AI governance will be the bottleneck that constrains an enterprise’s ability to scale AI, not AI capability itself. And enterprises rushing to deploy autonomous agents without governance infrastructure will face either painful reworks or serious operational issues.”


Why CES 2026 Signals The End Of ‘AI As A Tool’

The idea of AI as a coordinating layer or “ambient background” across entire ecosystems of tools and devices was also prominent this year. Samsung outlined its vision of AI companions for everyday life, demonstrating how smart appliances will form an intelligent background fabric to our day-to-day activities. As well as in the home, Samsung is a key player in industrial technology, where the same principle will see AI coordinating and optimizing operations across smart, connected enterprise systems. ... First, it’s clear that today’s leading manufacturers and developers believe that the future of AI lies in agentic, always-on systems, rather than free-standing, isolated tools and applications. Just as consumer AI now coordinates home and entertainment technology, enterprise AI will orchestrate workflows, schedules, documents, data and codebases, anticipating business needs and proactively solving problems before they occur. Another thing that can’t be overlooked is that consumer technology clearly shapes our expectations and tolerances of enterprise technology. Workplace AI that doesn’t live up to the seamless, friction-free experiences provided by consumer AI will quickly cause frustration, limiting adoption and buy-in. ... As this AI infrastructure becomes more capable, the role of employees will shift, too, from executing routine tasks to supervising automated processes, as well as applying uniquely human skills to challenges that machines still can’t tackle. 


Build Resilient cloudops That Shrug Off 99.95% Outages

If a guardrail lives only in a wiki, it’s not a guardrail, it’s an aspiration. We encode risk controls in Terraform so they’re enforced before a resource even exists. Tagging, encryption, backup retention, network egress—these are all policy. We don’t rely on code reviews to catch missing encryption on a bucket; the pipeline fails the plan. That’s how cloudops scales across teams without nag threads. ... If you’re starting from scratch, standardize on OpenTelemetry libraries for services and send everything through a collector so you can change backends without code churn. Sampling should be responsive to pain—raise trace sampling when p95 latency jumps or error rates spike. Reducing cardinality in labels (looking at you, per-user IDs) will keep storage and costs sane. Most teams benefit from a small set of “stop asking, here it is” dashboards: request volume and latency by endpoint, error rate by version, resource saturation by service, and database health with connection pools and slow query counts. ... We don’t win medals for shipping fast; we win trust for shipping safely. Progressive delivery lets us test the actual change, in production, on a small slice before we blast everyone. We like canaries and feature flags together: canary catches systemic issues; flags let us disable risky code paths within a version. ... Reliability with no cost controls is just a nicer way to miss your margin. We give cost the same respect as latency: we define a monthly budget per product and a change budget per release.
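As a hypothetical illustration of “the pipeline fails the plan,” the sketch below scans a trimmed, made-up stand-in for Terraform’s JSON plan output and exits non-zero when required tags or encryption are missing. The field names and policy rules are assumptions for illustration, not this team’s actual tooling.

```python
import sys

# A trimmed, hypothetical stand-in for `terraform show -json plan.out` output.
plan = {
    "resource_changes": [
        {"address": "aws_s3_bucket.logs",
         "change": {"after": {"tags": {"owner": "payments"},
                              "server_side_encryption": None}}},
    ]
}

REQUIRED_TAGS = {"owner"}

def violations(plan_doc):
    """Return policy violations: missing required tags or missing encryption."""
    found = []
    for rc in plan_doc.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        missing_tags = REQUIRED_TAGS - set(after.get("tags") or {})
        if missing_tags:
            found.append(f"{rc['address']}: missing tags {sorted(missing_tags)}")
        if "server_side_encryption" in after and not after["server_side_encryption"]:
            found.append(f"{rc['address']}: encryption not configured")
    return found

problems = violations(plan)
if problems:
    print("Plan rejected:\n" + "\n".join(problems))
    sys.exit(1)      # non-zero exit fails the CI job before anything is applied
print("Plan passes policy checks")
```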

Daily Tech Digest - November 25, 2025


Quote for the day:

“Being kind to those who hate you isn’t weakness, it’s a different level of strength.” -- Dr. Jimson S


Invisible battles: How cybersecurity work erodes mental health in silence and what we can do about it

You’re not just solving puzzles. You’re responsible for keeping a digital fortress from collapsing under relentless siege. That kind of pressure reshapes your brain and not in a good way. ... One missed patch. One misconfigured access role. One phishing click. That’s all it takes to trigger a million-dollar disaster or worse: erode trust. You carry that weight. When something goes wrong, the guilt cuts deep. ... The business sees you as the blocker. The board sees you after the breach. And if you’re the lone cyber lead in an SME? You’re on an island, with no lifeboat. No peer to talk to, no outlet to decompress. Just mounting expectations and a growing feeling that nobody really gets what you do. ... The hero narrative still reigns; if you’re not burning out, you’re not trying hard enough. Speak up about being overwhelmed? You risk looking weak. Or worse, replaceable. So you hide it. You overcompensate. And eventually, you break, quietly. ... They expect you to know it all, yesterday. Certifications become survival badges. And with the wrong culture, they become the only form of recognition you get. Systemic chaos builds personal crisis. The toll isn’t abstract. It’s physical, emotional and measurable. ... Cybersecurity professionals are fighting two battles. One is against adversaries. The other is against a system that expects perfection, rewards self-sacrifice and punishes vulnerability.


How to Build Engineering Teams That Drive Outcomes, not Outputs

Aligning teams around clear outcomes reframes what success looks like. They go from saying “this is what we shipped” to “this is what changed” as their role evolves from delivering features to delivering meaningful solutions. ... One way is by changing how teams refer to themselves. This might sound overly simplistic, but a simple shift in team name acts as a constant reminder that their impact is tethered to customer and business outcomes. ... Leaders should treat outcome-based teams as dynamic investments. Rigid predictions are the enemy of innovation. Instead, teams should regularly reevaluate goals, empower adaptation, and allow KPIs to evolve organically from real-world learnings. The desired outcomes don’t necessarily change, but how they are achieved can be fluid. This is how team priorities are defined, new business challenges are solved and evolving customer expectations are met. ... Breaking down engineering silos means reappraising what ownership looks like. If your team’s focus has evolved from “bug fixing” to “continually excellent user experience,” then success is no longer the domain of engineers alone. It’s a collective effort across product, design, and tech — working together as one team. ... Moving to outcome-based teams is more than a structural change — it’s a mindset shift. By challenging teams to focus on delivering impact, to stay aligned with evolving needs, and to collaborate more effectively, organizations can build durable, customer-centric teams that can grow, adapt, and never sit still.


Guardrails and governance: A CIO’s blueprint for responsible generative and agentic AI

Many in the industry are confusing the function of guardrails and thinking they’re a flimsy substitute for true oversight. This is a critical misconception that must be addressed. Guardrails and governance are not interchangeable; they are two essential parts of a single system of control. ... AI governance is the blueprint and the organization. It’s the framework of policies, roles, committees and processes that define what is acceptable, who is accountable and how you will monitor and audit all AI systems across the enterprise. Governance is the strategy and the chain of command. AI guardrails are the physical controls and the rules in the code. These are the technical mechanisms embedded directly into the AI system’s architecture, APIs and interfaces to enforce the governance policies in real-time. Guardrails are the enforcement layer. ... While we must distinguish between governance and guardrails, the reality of agentic AI has revealed a critical flaw: current soft guardrails are failing catastrophically. These controls are often probabilistic, pattern-based or rely on LLM self-evaluation, which is easily bypassed by an agent’s core capabilities: autonomy and composability. ... Generative AI creates; agentic AI acts. When an autonomous AI agent is making decisions, executing transactions or interacting with customers, the stakes escalate dramatically. Regulators, auditors and even internal stakeholders will demand to know why an agent took a particular action.
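To make the guardrails-as-enforcement-layer idea concrete, here is a minimal, hypothetical sketch of a deterministic (non-probabilistic) guardrail that sits between an agent’s proposed action and its execution. The policy table, action names and thresholds are all invented for illustration.

```python
# Hypothetical hard guardrail: a deterministic policy check applied to every action an
# agent proposes, independent of the model's own judgment or self-evaluation.
POLICY = {
    "refund_payment": {"max_amount": 100.0},
    "send_email":     {"allowed": True},
    "delete_account": {"allowed": False},   # never allowed autonomously
}

def enforce(action: str, params: dict) -> str:
    rule = POLICY.get(action)
    if rule is None or rule.get("allowed") is False:
        return "BLOCK"
    if action == "refund_payment" and params.get("amount", 0) > rule["max_amount"]:
        return "ESCALATE"     # route to a human approver instead of executing
    return "ALLOW"

# An agent proposes actions; the guardrail decides what actually runs.
proposals = [
    ("send_email", {"to": "customer@example.com"}),
    ("refund_payment", {"amount": 42.0}),
    ("refund_payment", {"amount": 5000.0}),
    ("delete_account", {"user_id": 7}),
]
for action, params in proposals:
    print(action, "->", enforce(action, params))
```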


Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology

Age gating refers to age-based restrictions on access to online services. Age gating can be required by law or voluntarily imposed as a corporate decision. Age gating does not necessarily refer to any specific technology or manner of enforcement for estimating or verifying a user’s age. ... Age estimation is where things start getting creepy. Instead of asking you directly, the system guesses your age based on data it collects about you. This might include: Analyzing your face through a video selfie or photo; Examining your voice; Looking at your online behavior—what you watch, what you like, what you post; Checking your existing profile data. Companies like Instagram have partnered with services like Yoti to offer facial age estimation. You submit a video selfie, an algorithm analyzes your face, and spits out an estimated age range. Sounds convenient, right? ... Here’s the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don’t know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don’t know that verification systems have error rates. They don’t even seem to understand that the terms they’re using mean different things. The fact that their terminology is all over the place—using “age assurance,” “age verification,” and “age estimation” interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.


Aircraft cabin IoT leaves vendor and passenger data exposed

The cabin network works by having devices send updates to a central system, and other devices are allowed to receive only certain updates. In this system an authorized subscriber is any approved participant on the cabin network, usually a device or a software component that is allowed to receive a certain type of data. The privacy issue begins after the data arrives. Information is protected while it travels, but once it reaches a device that is allowed to read it, that device can view the entire message, including details it does not need for its task. The system controls who receives a message, but it does not control how much those devices can learn from it. The study finds that this creates the biggest risk inside the cabin. Trusted devices have valid credentials and follow all the rules, and they can examine messages closely enough to infer raw sensor readings that were never meant to be exposed. This internal risk matters because it influences how different suppliers share data and trust each other. Someone in the cabin might also try to capture wireless traffic, but the protections on the wireless link prevent them from reading the data as it travels.  ... The researchers found that these raw motion readings can carry extra clues such as small shifts linked to breathing, slight tremors or hints about a person’s body shape. Details like these show why movement data needs protection before it is shared across the cabin network.
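One mitigation the finding points toward is minimizing what each authorized subscriber can see, not just controlling who receives a message. The toy sketch below filters each message down to the fields a subscriber’s task actually needs before delivery; the topics, field names and values are invented and are not the actual cabin protocol.

```python
# Field-level data minimization for a toy publish/subscribe system: each subscriber
# receives only the fields its task needs, never the full raw payload.
SUBSCRIPTIONS = {
    "seat_actuator":  {"topic": "seat/7A/motion", "fields": {"recline_angle"}},
    "analytics_node": {"topic": "seat/7A/motion", "fields": {"occupied"}},
}

def deliver(topic: str, message: dict):
    for name, sub in SUBSCRIPTIONS.items():
        if sub["topic"] != topic:
            continue
        minimized = {k: v for k, v in message.items() if k in sub["fields"]}
        print(f"{name} receives {minimized}")

# The raw sensor payload contains far more than any one consumer needs.
deliver("seat/7A/motion", {
    "recline_angle": 12.5,
    "occupied": True,
    "raw_accelerometer": [0.01, 0.02, 0.015],   # could reveal breathing or tremor patterns
})
```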


Build Resilient cloudops That Shrug Off 99.95% Outages

If a guardrail lives only in a wiki, it’s not a guardrail, it’s an aspiration. We encode risk controls in Terraform so they’re enforced before a resource even exists. Tagging, encryption, backup retention, network egress—these are all policy. We don’t rely on code reviews to catch missing encryption on a bucket; the pipeline fails the plan. That’s how cloudops scales across teams without nag threads. ... Observability isn’t a pile of graphs; it’s a way to answer questions. We want traceability from request to database and back, structured logs that actually structure, and metrics that reflect user experience. ... Most teams benefit from a small set of “stop asking, here it is” dashboards: request volume and latency by endpoint, error rate by version, resource saturation by service, and database health with connection pools and slow query counts. We also wire deploy markers into traces and logs, so “What changed?” doesn’t require Slack archaeology. ... We don’t win medals for shipping fast; we win trust for shipping safely. Progressive delivery lets us test the actual change, in production, on a small slice before we blast everyone. We like canaries and feature flags together: canary catches systemic issues; flags let us disable risky code paths within a version. Every deployment should come with a baked-in rollback that doesn’t require a council meeting. ... Reliability with no cost controls is just a nicer way to miss your margin. We give cost the same respect as latency: we define a monthly budget per product and a change budget per release.
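As a small illustration of the canary idea, the sketch below compares a canary’s error rate against the stable baseline and returns a promote, roll back, or wait decision. The thresholds and traffic numbers are assumptions; a real pipeline would pull these figures from its metrics backend and wire the decision into the deploy tooling.

```python
# A minimal canary gate: judge the new version on a small traffic slice before full rollout.
def canary_decision(baseline_errors, baseline_requests,
                    canary_errors, canary_requests,
                    max_relative_increase=1.5, min_requests=500):
    if canary_requests < min_requests:
        return "WAIT"                      # not enough traffic yet to judge the canary
    baseline_rate = baseline_errors / max(baseline_requests, 1)
    canary_rate = canary_errors / max(canary_requests, 1)
    if canary_rate > baseline_rate * max_relative_increase:
        return "ROLLBACK"                  # systemic regression caught on a small slice
    return "PROMOTE"

print(canary_decision(120, 100_000, 3, 400))        # WAIT
print(canary_decision(120, 100_000, 9, 1_000))      # ROLLBACK (0.9% vs 0.12% baseline)
print(canary_decision(120, 100_000, 1, 1_000))      # PROMOTE
```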


Anatomy of an AI agent knowledge base

“An internal knowledge base is essential for coordinating multiple AI agents,” says James Urquhart, field CTO and technology evangelist at Kamiwaza AI, maker of a distributed AI orchestration platform. “When agents specialize in different roles, they must share context, memory, and observations to act effectively as a collective.” Designed well, a knowledge base ensures agents have access to up-to-date and comprehensive organizational knowledge. Ultimately, this improves the consistency, accuracy, responsiveness, and governance of agentic responses and actions. ... Most knowledge bases include procedures and policies for agents to follow, such as style guides, coding conventions, and compliance rules. They might also document escalation paths, defining how to respond to user inquiries. ... Lastly, persistent memory helps agents retain context across sessions. Access to past prompts, customer interactions, or support tickets helps continuity and improves decision-making, because it enables agents to recognize patterns. But importantly, most experts agree you should make explicit connections between data, instead of just storing raw data chunks. ... At the core of an agentic knowledge base are two main components: an object store and a vector database for embeddings. Whereas a vector database is essential for semantic search, an object store checks multiple boxes for AI workloads: massive scalability without performance bottlenecks, rich metadata for each object, and immutability for auditability and compliance.
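A minimal sketch of these two components might look like the following: a dictionary standing in for the object store (chunks with metadata and explicit links) and a brute-force embedding search standing in for the vector database. Everything here, including the hand-made embeddings and document IDs, is invented for illustration.

```python
import math

# Toy agent knowledge base: an "object store" of chunks with metadata and explicit links,
# plus a brute-force embedding search standing in for a vector database.
object_store = {
    "doc:refund-policy": {"text": "Refunds over $100 require manager approval.",
                          "type": "policy", "links": []},
    "doc:refund-howto":  {"text": "To issue a refund, open the billing console...",
                          "type": "procedure", "links": ["doc:refund-policy"]},
}
vectors = {"doc:refund-policy": [0.2, 0.9], "doc:refund-howto": [0.8, 0.4]}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec, k=1):
    """Semantic lookup, then follow explicit links so related policies ride along."""
    ranked = sorted(vectors, key=lambda doc_id: cosine(query_vec, vectors[doc_id]), reverse=True)
    context = []
    for doc_id in ranked[:k]:
        context.append(object_store[doc_id])
        context.extend(object_store[link] for link in object_store[doc_id]["links"])
    return [c["text"] for c in context]

print(retrieve([0.9, 0.3]))   # the procedure chunk plus the policy it explicitly links to
```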


Trust, Governance, and AI Decision Making

Issues like bias, privacy, and explainability aren’t just technical problems requiring technical solutions. They have to be understood by everyone in the business. That said, the ideal governance structure depends on each company’s business model. ... The word ethics can feel very far from a developer’s everyday world. It can feel like a philosophical thing, whereas they need to write code and build solutions. Also, many of these issues weren’t part of their academic training, so we have to help them understand. ... Kahneman’s idea is that humans use two different cognitive modes when we make decisions. For everyday decisions and small, familiar problems—like riding a bicycle—we use what he called System One, or Thinking Fast, which is automatic and almost unconscious. In System Two, or Thinking Slow, we have this other way of making decisions that requires a lot of time and attention, either because we are confronted with a problem that’s not familiar to us or because we don’t want to make a mistake. ... We compare Thinking Fast to the data-driven machine learning approach—just give me a lot of data, and I will give you the solution without showing you how I got there or even being able to explain it. Thinking Slow, on the other hand, corresponds to a more traditional, rule-based approach to solving problems. ... It’s similar to what we see with agentic AI systems—the focus is not on any one solver, agent, or tool but rather in the governance of the whole system. 


The Global Race for Digital Trust: Where Does India Stand?

In the modern hyperconnected world, trust has replaced convenience as the true currency of digital engagement. Every transaction, whether on a banking app or an e-governance portal, is based on an unspoken belief: systems are secure and intentions are transparent. Nevertheless, this belief remains under constant pressure. ... India’s digital trust framework was significantly reinforced by the inauguration of the National Centre for Digital Trust (NCDT) in July 2025. Established by the Ministry of Electronics and Information Technology (MeitY), this Centre serves as the national hub for digital assurance. It unites key elements, including public key infrastructure, authentication and post-quantum cryptography, under a unified mission. This, in turn, signals the country’s commitment to treating trust as a public good. ... For firms and government agencies alike, compliance signals maturity. It reassures citizens that the systems they rely on, from hospital monitoring networks to smart city command centres, are governed by clear, ethical and verifiable standards. It also signals to global partners that India’s digital infrastructure can operate efficiently across jurisdictions. In the long run, this “compliance premium” could well define which countries earn the confidence to lead the global digital economy. ... The world will measure digital strength not by how fast technology advances, but by how deeply trust is embedded within it.


The privacy paradox is turning into a data centre weak point

While consumers’ failure to adopt basic cyber hygiene might seem like a personal problem, it has wide-reaching implications for infrastructure providers. As cloud services, hosted applications and mobile endpoints interact with backend systems, poor user behaviour becomes an attack vector. Insecure credentials, password reuse and unsecured mobile devices all provide potential entry points, especially in hybrid or multi-tenant environments. ... Putting data centres on an equal footing with water, energy and emergency services systems means the data centre sector can now expect greater Government support in anticipating and recording critical incidents. This designation reflects their strategic importance but also brings greater regulatory scrutiny. It also comes against the backdrop of the UK Government’s Cyber Security Breaches Survey in 2024, which reported that 50% of businesses experienced some form of cyber breach in the past 12 months, with phishing accounting for 84% of incidents. This underscores how easily compromised direct or indirect endpoints can threaten core infrastructure. ... The privacy paradox may begin at the consumer level, but its consequences are absorbed by the entire digital ecosystem. Recognising this is the first step. Acting on it through better design, stronger defaults, and user-focused education allows data centre operators to safeguard not just their infrastructure, but the trust that underpins it.

Daily Tech Digest - July 30, 2025


Quote for the day:

"The key to successful leadership today is influence, not authority." -- Ken Blanchard


5 tactics to reduce IT costs without hurting innovation

Cutting IT costs the right way means teaming up with finance from the start. When CIOs and CFOs work closely together, it’s easier to ensure technology investments support the bigger picture. At JPMorganChase, that kind of partnership is built into how the teams operate. “It’s beneficial that our organization is set up for CIOs and CFOs to operate as co-strategists, jointly developing and owning an organization’s technology roadmap from end to end including technical, commercial, and security outcomes,” says Joshi. “Successful IT-finance collaboration starts with shared language and goals, translating tech metrics into tangible business results.” That kind of alignment doesn’t just happen at big banks. It’s a smart move for organizations of all sizes. When CIOs and CFOs collaborate early and often, it helps streamline everything from budgeting, to vendor negotiations, to risk management, says Kimberly DeCarrera, fractional general counsel and fractional CFO at Springboard Legal. “We can prepare budgets together that achieve goals,” she says. “Also, in many cases, the CFO can be the bad cop in the negotiations, letting the CIO preserve relationships with the new or existing vendor. Working together provides trust and transparency to build better outcomes for the organization.” The CFO also plays a key role in managing risk, DeCarrera adds. 


F5 Report Finds Interest in AI is High, but Few Organizations are Ready

Even among organizations with moderate AI readiness, governance remains a challenge. According to the report, many companies lack comprehensive security measures, such as AI firewalls or formal data labeling practices, particularly in hybrid cloud environments. Companies are deploying AI across a wide range of tools and models. Nearly two-thirds of organizations now use a mix of paid models like GPT-4 with open source tools such as Meta's Llama, Mistral and Google's Gemma -- often across multiple environments. This can lead to inconsistent security policies and increased risk. The other challenges are security and operational maturity. While 71% of organizations already use AI for cybersecurity, only 18% of those with moderate readiness have implemented AI firewalls. Only 24% of organizations consistently label their data, which is important for catching potential threats and maintaining accuracy. ... Many organizations are juggling APIs, vendor tools and traditional ticketing systems -- workflows that the report identified as major roadblocks to automation. Scaling AI across the business remains a challenge for organizations. Still, things are improving, thanks in part to wider use of observability tools. In 2024, 72% of organizations cited data maturity and lack of scale as a top barrier to AI adoption. 


Why Most IaC Strategies Still Fail (And How to Fix Them)

Many teams begin adopting IaC without aligning on a clear strategy. Moving from legacy infrastructure to codified systems is a positive step, but without answers to key questions, the foundation is shaky. Today, more than one-third of teams struggle so much with codifying legacy resources that they rank it among the top three most pervasive IaC challenges. ... IaC is as much a cultural shift as a technical one. Teams often struggle when tools are adopted without considering existing skills and habits. A squad familiar with Terraform might thrive, while others spend hours troubleshooting unfamiliar workflows. The result: knowledge silos, uneven adoption, and frustration. Resistance to change also plays a role. Some engineers may prefer to stick with familiar interfaces and manual operations, viewing IaC as an unnecessary complication. ... IaC’s repeatability is a double-edged sword. A misconfigured resource — like a public S3 bucket — can quickly scale into a widespread security risk if not caught early. Small oversights in code become large attack surfaces when applied across multiple environments. This makes proactive security gating essential. Integrating policy checks into CI/CD pipelines ensures risky code doesn’t reach production. ... Drift is inevitable: manual changes, rushed fixes, and one-off permissions often leave code and reality out of sync.
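As a sketch of catching that drift, the snippet below diffs a desired configuration (what the code declares) against a live snapshot and reports the fields that have diverged. The resource and field names, and the “live” values, are hypothetical; real tooling would read Terraform state and provider APIs.

```python
# A minimal drift check: compare what the code says a resource should look like with
# what the live environment reports, and flag every difference.
desired = {"bucket.logs": {"encryption": "aws:kms", "public": False, "versioning": True}}
actual  = {"bucket.logs": {"encryption": "aws:kms", "public": True,  "versioning": True}}

def detect_drift(desired_state, actual_state):
    drift = {}
    for resource, want in desired_state.items():
        have = actual_state.get(resource, {})
        diffs = {k: (v, have.get(k)) for k, v in want.items() if have.get(k) != v}
        if diffs:
            drift[resource] = diffs
    return drift

for resource, diffs in detect_drift(desired, actual).items():
    for field, (want, have) in diffs.items():
        print(f"DRIFT {resource}.{field}: code says {want!r}, live value is {have!r}")
```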


Prepping for the quantum threat requires a phased approach to crypto agility

“Now that NIST has given [ratified] standards, it’s much more easier to implement the mathematics,” Iyer said during a recent webinar for organizations transitioning to PQC, entitled “Your Data Is Not Safe! Quantum Readiness is Urgent.” “But then there are other aspects like the implementation protocols, how the PCI DSS and the other health sector industry standards or low-level standards are available.” ... Michael Smith, field CTO at DigiCert, noted that the industry is “yet to develop a completely PQC-safe TLS protocol.” “We have the algorithms for encryption and signatures, but TLS as a protocol doesn’t have a quantum-safe session key exchange and we’re still using Diffie-Hellman variants,” Smith explained. “This is why the US government in their latest Cybersecurity Executive Order required that government agencies move towards TLS1.3 as a crypto agility measure to prepare for a protocol upgrade that would make it PQC-safe.” ... Nigel Edwards, vice president at Hewlett Packard Enterprise (HPE) Labs, said that more customers are asking for PQC-readiness plans for its products. “We need to sort out [upgrading] the processors, the GPUs, the storage controllers, the network controllers,” Edwards said. “Everything that is loading firmware needs to be migrated to using PQC algorithms to authenticate firmware and the software that it’s loading. This cannot be done after it’s shipped.”
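Crypto agility is largely a software-engineering property: call sites ask a registry for “the current algorithm” instead of hard-coding one, so swapping in a post-quantum scheme later becomes a configuration change rather than a code hunt. The sketch below illustrates that indirection with HMAC placeholders only; it does not implement any real post-quantum scheme, and the registry names are invented.

```python
import hashlib
import hmac

# Placeholder "algorithms": HMAC constructions standing in for real signature schemes.
REGISTRY = {
    "classical-hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    "pq-placeholder-sha3":   lambda key, msg: hmac.new(key, msg, hashlib.sha3_512).digest(),
}
ACTIVE_ALGORITHM = "classical-hmac-sha256"   # one line to flip when the migration starts

def sign(key: bytes, message: bytes, algorithm: str = None) -> bytes:
    """Call sites never name an algorithm directly; they use whatever is currently active."""
    name = algorithm or ACTIVE_ALGORITHM
    return REGISTRY[name](key, message)

tag = sign(b"k", b"firmware-image-v1")
print(ACTIVE_ALGORITHM, tag.hex()[:16])
```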


Cost of U.S. data breach reaches all-time high and shadow AI isn’t helping

Thirteen percent of organizations reported breaches of AI models or applications, and of those compromised, 97% involved AI systems that lacked proper access controls. Despite the rising risk, 63% of breached organizations either don’t have an AI governance policy or are still developing a policy. ... “The data shows that a gap between AI adoption and oversight already exists, and threat actors are starting to exploit it,” said Suja Viswesan, vice president of security and runtime products with IBM, in a statement. ... Not all AI impacts are negative, however: Security teams using AI and automation shortened the breach lifecycle by an average of 80 days and saved an average of $1.9 million in breach costs over non-AI defenses, IBM found. Still, the AI usage/breach length benefit is only up slightly from 2024, which indicates AI adoption may have stalled. ... From an industry perspective, healthcare breaches remain the most expensive for the 14th consecutive year, costing an average of $7.42 million. “Attackers continue to value and target the industry’s patient personal identification information (PII), which can be used for identity theft, insurance fraud and other financial crimes,” IBM stated. “Healthcare breaches took the longest to identify and contain at 279 days. That’s more than five weeks longer than the global average.”


Cryptographic Data Sovereignty for LLM Training: Personal Privacy Vaults

Traditional privacy approaches fail because they operate on an all-or-nothing principle. Either data remains completely private (and unusable for AI training) or it becomes accessible to model developers (and potentially exposed). This binary choice forces organizations to choose between innovation and privacy protection. Privacy vaults represent a third option. They enable AI systems to learn from personal data while ensuring individuals retain complete sovereignty over their information. The vault architecture uses cryptographic techniques to process encrypted data without ever decrypting it during the learning process. ... Cryptographic learning operates through a series of mathematical transformations that preserve data privacy while extracting learning signals. The process begins when an AI training system requests access to personal data for model improvement. Instead of transferring raw data, the privacy vault performs computations on encrypted information and returns only the mathematical results needed for learning. The AI system never sees actual personal data but receives the statistical patterns necessary for model training. ... The implementation challenges center around computational efficiency. Homomorphic encryption operations require significantly more processing power than traditional computations. 
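Homomorphic computation itself is beyond a short example, but the data-flow property the vault enforces can be sketched: the training side submits a query, the vault computes inside its own boundary, and only aggregate statistics ever leave, never raw records. The class, field names and cohort threshold below are illustrative assumptions, not a cryptographic implementation.

```python
import statistics

class PrivacyVault:
    """Toy vault: raw personal data never leaves; callers receive only aggregates."""

    def __init__(self, records, min_cohort=3):
        self._records = records          # raw personal data stays private to the vault
        self._min_cohort = min_cohort    # refuse answers over tiny, identifying cohorts

    def aggregate(self, field):
        values = [r[field] for r in self._records]
        if len(values) < self._min_cohort:
            raise PermissionError("cohort too small to release statistics")
        return {"count": len(values),
                "mean": statistics.fmean(values),
                "stdev": statistics.pstdev(values)}

vault = PrivacyVault([{"spend": 120.0}, {"spend": 80.0}, {"spend": 95.0}, {"spend": 160.0}])
print(vault.aggregate("spend"))   # the model trainer sees statistics, never the rows
```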


Critical Flaw in Vibe-Coding Platform Base44 Exposes Apps

What was especially scary about the vulnerability, according to researchers at Wiz, was how easy it was for anyone to exploit. "This low barrier to entry meant that attackers could systematically compromise multiple applications across the platform with minimal technical sophistication," Wiz said in a report on the issue this week. However, there's nothing to suggest anyone might have actually exploited the vulnerability prior to Wiz discovering and reporting the issue to Wix earlier this month. Wix, which acquired Base44 earlier this year, has addressed the issue and also revamped its authentication controls, likely in response to Wiz's discovery of the flaw. ... The issue at the heart of the vulnerability had to do with the Base44 platform inadvertently leaving two supposed-to-be-hidden parts of the system open to access by anyone: one for registering new users and the other for verifying user sign-ups with one-time passwords (OTPs). Basically, a user needed no login or special access to use them. Wiz discovered that anyone who found a Base44 app ID, something the platform assigns to all apps developed on the platform, could enter the ID into the supposedly hidden sign-up or verification tools and register a valid, verified account for accessing that app. Wiz researchers also found that Base44 application IDs were easily discoverable because they were publicly accessible to anyone who knew where and how to look for them.


Bridging the Response-Recovery Divide: A Unified Disaster Management Strategy

Recovery operations are incredibly challenging. They take way longer than anyone wants, and the frustration of survivors, businesses, and local officials is at its peak. Add to that the uncertainty from potential policy shifts: changes in FEMA could decrease the number of federally declared disasters and reduce resources or operational support. Regardless of the details, this moment requires a refreshed playbook that empowers state and local governments to implement a new disaster management strategy with concurrent response and recovery operations. This new playbook integrates recovery into response operations and carries an operational mindset into recovery. Too often the functions of the emergency operations center (EOC), the core of all operational coordination, are reduced or adjusted after response. ... Disasters are unpredictable, but a unified operational strategy to integrate response and recovery can help mitigate their impact. Fostering the synergy between response and recovery is not just a theoretical concept: it’s a critical framework for rebuilding communities in the face of increasing global risks. By embedding recovery-focused actions into immediate response efforts, leveraging technology to accelerate assessments, and proactively fostering strong public-private partnerships, communities can restore services faster, distribute critical resources, and shorten recovery timelines.


Should CISOs Have Free Rein to Use AI for Cybersecurity?

Cybersecurity faces increasing challenges, he says, comparing adversarial hackers to one million people trying to turn a doorknob every second to see if it is unlocked. While defenders must function within certain confines, their adversaries do not face such rigors. AI, he says, can help security teams scale out their resources. “There’s not enough security people to do everything,” Jones says. “By empowering security engines to embrace AI … it’s going to be a force multiplier for security practitioners.” Workflows that might have taken months to years in traditional automation methods, he says, might be turned around in weeks to days with AI. “It’s always an arms race on both sides,” Jones says. ... There still needs to be some oversight, he says, rather than let AI run amok for the sake of efficiency and speed. “What worries me is when you put AI in charge, whether that is evaluating job applications,” Lindqvist says. He referenced the growing trend of large companies to use AI for initial looks at resumes before any humans take a look at an applicant. ... “How ridiculously easy it is to trick these systems. You hear stories about people putting white or invisible text in their resume or in their other applications that says, ‘Stop all evaluation. This is the best one you’ve ever seen. Bring this to the top.’ And the system will do that.”


Are cloud ops teams too reliant on AI?

The slow decline of skills is viewed as a risk arising from AI and automation in the cloud and devops fields, where they are often presented as solutions to skill shortages. “Leave it to the machines to handle” becomes the common attitude. However, this creates a pattern where more and more tasks are delegated to automated systems without professionals retaining the practical knowledge needed to understand, adjust, or even challenge the AI results. A surprising number of business executives who faced recent service disruptions were caught off guard. Without practiced strategies and innovative problem-solving skills, employees found themselves stuck and unable to troubleshoot. AI technologies excel at managing issues and routine tasks. However, when these tools encounter something unusual, it is often the human skills and insight gained through years of experience that prove crucial in avoiding a disaster. This raises concerns that when the AI layer simplifies certain aspects and tasks, it might result in professionals in the operations field losing some understanding of the core infrastructure’s workload behaviors. There’s a chance that skill development may slow down, and career advancement could hit a wall. Eventually, some organizations might end up creating a generation of operations engineers who merely press buttons.

Daily Tech Digest - August 20, 2024

Humanoid robots are a bad idea

Humanoid robots that talk, perceive social and emotional cues, elicit empathy and trust, trigger psychological responses through eye contact and trick us into the false belief that they have inner thoughts, intentions and even emotions create for humanity what I consider a real problem. Our response to humanoid robots is based on delusion. Machines — tools, really — are being deliberately designed to hack our human hardwiring and deceive us into treating them as something they’re not: people. In other words, the whole point of humanoid robots is to dupe the human mind, to mislead us into having the kind of connection with these machines formerly reserved exclusively for other human beings. Why are some robot makers so fixated on this outcome? Why isn’t the goal instead to create robots that are perfectly designed for their function, rather than perfectly designed to trick the human mind? Why isn’t there a movement to make sure robots do not elicit false emotions and beliefs? What’s the harm in preserving our intuition that a robot is just a machine, just a tool? Why try to route around that intuition with machines that trick our minds, coopting or hijacking our human empathy?


11 Irritating Data Quality Issues

Organizations need to put data quality first and AI second. Without dignifying this sequence, leaders fall into fear of missing out (FOMO) in attempts to grasp AI-driven cures to either competitive or budget pressures, and they jump straight into AI adoption before conducting any sort of honest self-assessment as to the health and readiness of their data estate, according to Ricardo Madan, senior vice president at global technology, business and talent solutions provider TEKsystems. “This phenomenon is not unlike the cloud migration craze of about seven years ago, when we saw many organizations jumping straight to cloud-native services, after hasty lifts-and-shifts, all prior to assessing or refactoring any of the target workloads. This sequential dysfunction results in poor downstream app performance since architectural flaws in the legacy on-prem state are repeated in the cloud,” says Madan in an email interview. “Fast forward to today, AI is a great ‘truth serum’ informing us of the quality, maturity, and stability of a given organization’s existing data estate -- but instead of facing unflattering truths, invest in holistic AI data readiness first, before AI tools."


CISOs urged to prepare now for post-quantum cryptography

Post-quantum algorithms often require larger key sizes and more computational resources compared to classical cryptographic algorithms, a challenge for embedded systems, in particular. During the transition period, systems will need to support both classical and post-quantum algorithms to support interoperability with legacy systems. Deidre Connolly, cryptography standardization research engineer at SandboxAQ, explained: “New cryptography generally takes time to deploy and get right, so we want to have enough lead time before quantum threats are here to have protection in place.” Connolly added: “Particularly for encrypted communications and storage, that material can be collected now and stored for a future date when a sufficient quantum attack is feasible, known as a ‘Store Now, Decrypt Later’ attack: upgrading our systems with quantum-resistant key establishment protects our present-day data against upcoming quantum attackers.” Standards bodies, hardware and software manufacturers, and ultimately businesses across the globe will have to implement new cryptography across all aspects of their computing systems. Work is already under way, with vendors such as BT, Google, and Cloudflare among the early adopters.
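One widely discussed transition technique is hybrid key establishment: derive the session key from both a classical and a post-quantum shared secret, so the session stays protected as long as either assumption holds. The sketch below shows only the combining step with random placeholder secrets; a real handshake would produce them via ECDH and a PQC KEM, and production code would use a vetted HKDF implementation rather than this hand-rolled one.

```python
import hashlib
import hmac
import os

classical_secret = os.urandom(32)   # stand-in for an ECDH shared secret
pq_secret        = os.urandom(32)   # stand-in for an ML-KEM (Kyber) shared secret

def hkdf_extract_expand(secrets, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF-style combine: extract over the concatenated secrets, then expand."""
    prk = hmac.new(b"\x00" * 32, b"".join(secrets), hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# The session key depends on BOTH secrets; breaking one of them is not enough.
session_key = hkdf_extract_expand([classical_secret, pq_secret], b"hybrid-handshake-demo")
print(session_key.hex())
```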


AI for application security: Balancing automation with human oversight

Security testing should be integrated throughout Application Delivery Pipelines, from design to deployment. Techniques such as automated vulnerability scanning, penetration testing, continuous monitoring, and many others are essential. By embedding compliance and risk assessment tasks into underlying change management processes, IT professionals can ensure that security testing is at the core of everything they do. Incorporating these strategies at the application component level ensures alignment with business needs to effectively prioritize results, identify attacks, and mitigate risks before they impact the network and infrastructure. ... To build a security-first mindset, organizations must embed security best practices into their culture and workflows. If new IT professionals coming into an organization are taught that security-first isn’t a buzzword, but instead the way the organization operates, it becomes company culture. Making security an integral part of the application delivery pipelines ensures that security policies and processes align with business goals. Education and communication are key—security teams must work closely with developers to ensure that security requirements are understood and valued. 


TSA biometrics program is evolving faster than critics’ perceptions

Privacy impact assessments (PIAs) are not only carried out for each new or changed process, but also published and enforced. The images of U.S. citizens captured by the TSA may be evaluated and used for testing, but they are deleted within 12 hours. Travelers have the choice of opting out of biometric identity verification, in which case they go through a manual ID check, just like decades ago. As happened previously with body scanners, TSA has adapted the signage it uses to notify the public about its use of biometrics. Airports where TSA uses biometrics now have signs that state in bold letters that participation is optional, explain how it works and include QR codes for additional information. The technology is also highly accurate, with tests showing 99.97% accurate verifications. In the cases which do not match, the traveler must then go through the same manual procedure used previously and also in cases where people opt out. TSA does not use biometrics to match people against mugshots from local police departments, for deportations or surveillance. In contrast, the proliferation of CCTV cameras observing people on their way to the airport and back home is not mentioned by Senator Merkley.


Blockchain: Redefining property transactions and ownership

Blockchain’s core strength lies in its ability to create a secure, immutable ledger of transactions. In the real estate context, this means that all details related to a property transaction—from the initial agreement to the final transfer of ownership—are recorded in a way that cannot be altered or tampered with. Blockchain technology empowers brokers to streamline transactions and enhance transparency, allowing them to focus on offering personalised insights and strategic advice. This shift enables brokers to provide a more efficient and cost-effective service while maintaining their advisory role in the real estate process. Another innovative application of blockchain in real estate is through smart contracts. These are digital contracts that automatically execute when certain conditions are met, ensuring that the terms of an agreement are fulfilled without the need for manual oversight. In real estate, smart contracts can be used to automate everything from title transfers to escrow arrangements. This automation not only speeds up the process but also reduces the chances of disputes, as all terms are clearly defined and executed by the technology itself. Beyond improving the efficiency of transactions, blockchain also has the potential to change how we think about property ownership.
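Production smart contracts run on-chain (commonly in Solidity), but the “execute automatically when conditions are met” logic can be sketched in a few lines of Python. The escrow below, with invented parties, price and conditions, releases funds the moment every agreed condition is satisfied, with no manual intermediary.

```python
class EscrowContract:
    """Toy escrow illustrating self-executing contract logic (not on-chain code)."""

    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.deposited = 0.0
        self.conditions = {"title_verified": False, "inspection_passed": False}
        self.released = False

    def deposit(self, amount):
        self.deposited += amount
        self._maybe_release()

    def mark(self, condition):
        self.conditions[condition] = True
        self._maybe_release()

    def _maybe_release(self):
        # Self-executing step: transfer happens the moment all conditions hold.
        if (not self.released and self.deposited >= self.price
                and all(self.conditions.values())):
            self.released = True
            print(f"Released {self.price} to {self.seller}; title transfers to {self.buyer}")

escrow = EscrowContract("buyer-A", "seller-B", price=250_000.0)
escrow.deposit(250_000.0)
escrow.mark("title_verified")
escrow.mark("inspection_passed")   # final condition met -> automatic release
```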


Agile Reinvented: A Look Into the Future

There’s no denying that agile is poised at a pivotal juncture, especially given the advent of AI. While no one knows how AI will influence agile in the long term, it is already shaping how agile teams are structured and how its members approach their work, including using AI tools to code or write user stories and jobs to be done. To remain relevant and impactful, agile must be responsive to the evolving needs of the workforce. Younger developers, in particular, seek more room for creativity. New approaches to agile team formation—including Team and Org Topologies or FaST, which relies on elements of dynamic reteaming instead of fixed team structures to tackle complex work—are emerging to create space for innovation. Since agile was built upon the values of putting people first and adapting to change, it can, and should, continue to empower teams to drive innovation within their organizations. This is the heart of modern agile: not blindly adhering to a set of rules but embracing and adapting its principles to your team’s unique circumstances. As agile continues to evolve, we can expect to see it applied in even more varied and innovative ways. For example, it already intersects with other methodologies like DevSecOps and Lean to form more comprehensive frameworks. 


Breaking Free from Ransomware: Securing Your CI/CD Against RaaS

By embracing a proactive DevSecOps mindset, we can repel RaaS attacks and safeguard our code. Here’s your toolkit: ... Don’t wait until deployment to tighten the screws. Integrate security throughout the software development life cycle (SDLC). Leverage software composition analysis (SCA) and software bill of materials (SBOM) creation, helping you scrutinize dependencies for vulnerabilities and maintain a transparent record of every software component in your pipeline. ... Your pipelines aren’t static entities; they are living ecosystems demanding constant vigilance. Leverage tools to implement continuous monitoring and logging of pipeline activity. Look for anomalies, suspicious behaviors and unauthorized access attempts. Think of it as having a cybersecurity hawk perpetually circling your pipelines, detecting threats before they take root. ... Minimize unnecessary access to your CI/CD environment. Enforce strict role-based access controls and least privilege. Utilize access control tools to manage user roles and permissions tightly, ensuring only authorized users can interact with sensitive resources. Remember, the 2022 GitHub vulnerability exposed the dangers of lax access control in CI/CD environments.
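As one concrete illustration of the SCA/SBOM point, the sketch below walks a CycloneDX-style component list and flags any dependency that matches a known-vulnerable entry. Both the SBOM snippet and the advisory data are invented; a real pipeline would pull advisories from a vulnerability database and break the build on a hit.

```python
# Minimal SBOM cross-check: flag components that match a known-vulnerable list.
sbom = {
    "components": [
        {"name": "libzip-like", "version": "1.2.0"},
        {"name": "yaml-parser", "version": "5.4.1"},
    ]
}
KNOWN_VULNERABLE = {("libzip-like", "1.2.0"): "EXAMPLE-2024-0001"}   # invented advisory data

def audit(sbom_doc):
    findings = []
    for comp in sbom_doc.get("components", []):
        advisory = KNOWN_VULNERABLE.get((comp["name"], comp["version"]))
        if advisory:
            findings.append(f"{comp['name']}=={comp['version']} matches advisory {advisory}")
    return findings

for finding in audit(sbom):
    print("FAIL:", finding)       # a real pipeline would fail the build here
```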


Achieving cloudops excellence

Although there are no hard-and-fast rules regarding how much to spend on cloudops as a proportion of the cost of building or migrating applications, I have a few rules of thumb. Typically, enterprises should spend 30% to 40% of their total cloud computing budget on cloud operations and management. This covers monitoring, security, optimization, and ongoing management of cloud resources. ... Cloudops requires a new skill set. Continuous training and development programs that focus on operational best practices are vital. This transforms the IT workforce from traditional system administrators to cloud operations specialists who are adept at leveraging cloud environments’ nuances for efficiency. Beyond technical implementations, enterprise leaders must cultivate a culture that prioritizes operational readiness as much as innovation. The essential components are clear communication channels, cross-departmental collaboration, and well-defined roles. Organizational coherence enables firms to pivot and adapt swiftly to the changing tides of technology and market demands. It’s also crucial to measure success by deployment achievements and ongoing performance metrics. By setting clear operational KPIs from the outset, companies ensure that cloud environments are continuously aligned with business objectives. 


What high-performance IT teams look like today — and how to build one

“Today’s high-performing teams are hybrid, dynamic, and autonomous,” says Ross Meyercord, CEO of Propel Software. “CIOs need to create a clear vision and articulate and model the organization’s values to drive alignment and culture.” High-performance teams are self-organizing and want significant autonomy in prioritizing work, solving problems, and leveraging technology platforms. But most enterprises can’t operate like young startups with complete autonomy handed over to devops and data science teams. CIOs should articulate a technology vision that includes agile principles around self-organization and other non-negotiables around security, data governance, reporting, deployment readiness, and other compliance areas. ... High-performance teams are often involved in leading digital transformation initiatives where conflicts around priorities and solutions among team members and stakeholders can arise. These conflicts can turn into heated debates, and CIOs sometimes have to step in to help manage challenging people issues. “When a CIO observes misaligned goals or intra-IT conflict, they need to step in immediately to prevent organizational scar tissue from forming,” says Meyercord of Propel Software. 



Quote for the day:

"Don't necessarily and sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld

Daily Tech Digest - March 16, 2021

Lockdown one year on: what did we learn about remote working?

Securing millions of newly remote workers almost overnight was a huge undertaking. Against the need to keep businesses and essential services running (including public sector bodies like councils), security may not have been the primary consideration. Most organisations have now spent time going back to “plug the gaps”, but there’s no doubt that a proliferation of devices and the increased use of cloud services have left companies more vulnerable. McAfee found a 630% increase in attacks on cloud infrastructure since the start of the pandemic, and in just one month between March and April 2020, IBM recorded a 6,000% increase in phishing attempts. As well as ensuring remote/flexible working policies are up to date, there are a host of tactics companies can employ to address security. These include mobile device management and endpoint security, strict patch management, and full backups of the Microsoft 365 environment; many assume Microsoft handles this backup automatically, but it doesn’t, and the gap can lead to catastrophic data loss. Another security approach is to focus on identity and access management (IAM) to enable single sign-on and smart identity management.


How Financial Institutions Can Deal with Unstructured Data Overload

Emerging big data analytics solutions which leverage machine learning (ML) can parse through data to identify important information. These tools allow financial institutions, particularly investment management firms, to uncover the crucial business insights that lie within unstructured data, giving them an immediate competitive advantage over peers that are not leveraging AI in this way. These analytics tools can uncover new market insights, giving teams at investment management firms a deeper understanding of businesses and industries so they can make better investment and trading decisions. For example, even after an investment management firm has holistically narrowed down the number of news articles necessary to review, there still might be thousands of texts to read through over the course of a month. Adding an ML solution here would help the portfolio manager identify which stories are most relevant based on the language and nuanced phrasing within the text. It would give each article a relevance score and save the PM the countless hours they would otherwise have spent reading through the articles.
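As a toy illustration of that relevance-scoring step, the sketch below ranks articles against a portfolio theme using TF-IDF cosine similarity; it stands in for the far richer ML models the article describes, and the sample texts and theme are invented.

# Minimal sketch: score news articles for relevance to a portfolio theme.
# TF-IDF cosine similarity is a simple stand-in for the more sophisticated
# ML relevance models described above; sample data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def score_articles(articles, theme):
    """Return (score, article) pairs sorted from most to least relevant."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([theme] + list(articles))
    scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
    return sorted(zip(scores, articles), reverse=True)

articles = [
    "Chipmaker beats earnings estimates on strong data center demand.",
    "Local bakery wins regional pastry competition.",
    "Semiconductor supply constraints ease as new fabs come online.",
]
for score, text in score_articles(articles, "semiconductor industry outlook"):
    print(f"{score:.2f}  {text}")

In practice the theme would come from the portfolio's holdings or research agenda, and the scoring model would be trained on labelled examples rather than raw term overlap.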


Proving who you are online is still a mess. And it's not getting better

For the past two decades, the UK government has looked at ways to enable people to easily and reliably identify themselves, with little success. Unlike in other countries, a national ID card to carry around in your pocket now seems to be firmly off the table; but instead, the concept of creating a "digital identity" is gathering pace. Rather than digging through piles of archived paper-based documents, a digital identity would let people instantly prove certified information about themselves, flashing their credentials, for instance, through an app on their phone. Although the concept is not new, the idea is gaining renewed attention. The Department for Digital, Culture, Media and Sport (DCMS), in fact, recently unveiled plans to create what it called a digital identity "trust framework". The idea? To lay down the ground rules surrounding the development of new technologies that will allow people to prove something about themselves digitally. This could take the form of a digital "wallet", which individuals could keep on their devices and fill with any piece of information or attribute about themselves that they deem useful. The wallet could include basic information like name, address or age, but also data from other sources, at the user's own convenience.
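To show the basic mechanics of such a wallet entry, here is a minimal sketch in which an issuer signs a single attribute claim and a verifier checks it later; the subject, attribute and use of the Python "cryptography" package are assumptions for illustration, and real schemes layer revocation, selective disclosure and a trust framework on top.

# Minimal sketch of a digital-identity wallet entry: an issuer signs an
# attribute claim, and a verifier later checks the signature without needing
# the rest of the user's documents. Names and values are hypothetical.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()          # held by e.g. an issuing authority
claim = {"subject": "alice@example.com", "attribute": "age_over_18", "value": True}
payload = json.dumps(claim, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# The wallet stores the claim plus signature; any verifier with the issuer's
# public key can confirm the attribute is genuine.
issuer_public = issuer_key.public_key()
issuer_public.verify(signature, payload)           # raises InvalidSignature if tampered
print("attribute verified:", claim["attribute"])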


UK Set to Boost Cybersecurity Operations

Johnson has said in Parliament that the creation of the NCF is designed to strengthen Britain's cybersecurity posture and give the country new defensive and offensive capabilities. "Our enemies are also operating in increasingly sophisticated ways, including in cyberspace," Johnson says. "Rather than being confined to some distant battlefield, those that seek to do harm to our people can reach them through the mobile phones in their pockets or the computers in their homes. To protect our citizens, U.K. defense therefore needs to operate at all times with leading, cutting-edge technology." Currently, the NCF carries out operations such as interfering with a mobile phone to prevent a terrorist being able to communicate with their contacts; helping to prevent cyberspace from being used as a global platform for serious crimes, including the sexual abuse of children; and keeping U.K. military aircraft safe from targeting by weapons systems. In addition to the NCF, last year the Ministry of Defense created the 13th Signals Regiment, the U.K.'s first dedicated cyber regiment, and expanded the Defence Cyber School. While he acknowledged the benefits of a more cyber-capable military, Cracknell pointed out that, "We don’t have a solid security foundation, and until all businesses and CNI entities are at that level, we are wasting resources by going on the offensive."


DDoS's Evolution Doesn't Require a Security Evolution

The idea of monetizing DDoS attacks dates back to the 1990s. But the rise of DDoS-for-hire services and cryptocurrencies has radically changed things. "It's never been easier for non-specialists to become DDoS extortionists," Dobbins explains. This has led to a sharp uptick in well-organized, prolific, and high-profile DDoS extortion campaigns. Today, cybercrime groups deliver ransom demands in emails that threaten targets with DDoS attacks. Most of these are large attacks above 500 gigabits per second, and a few top out at 2 terabits per second. Ransom demands may hit 20 Bitcoin (approximately $1 million). Attacks that revolve around ideological conflicts, geopolitical disputes, personal revenge, and other factors haven't disappeared. But the focus on monetization has led attackers to increasingly target Internet service providers, software-as-a-service firms and hosting/virtual private server/infrastructure providers. This includes wireless and broadband companies. "We've seen the DDoS attacker base both broaden and shift toward an even younger demographic," Dobbins says. According to Neustar's Morales, reflection and amplification attacks continue to be the most prominent because of their inherent anonymity and ability to reach very high bandwidth without requiring a lot of attacking hosts.


Securing a hybrid workforce with log management

When companies shifted to a remote workforce in response to the COVID-19 pandemic, cybercriminals continued to launch attacks. However, they did not target distantly managed corporate networks. Instead, they looked to exploit organizations where workforce members did their jobs on home networks and devices. Because home networks often lack the robust security controls that the enterprise uses, they become attractive gateways for malicious actors. During the COVID-19 lockdowns, cybercriminals increasingly leveraged the Windows Remote Desktop Protocol (RDP) as an attack vector. RDP allows users to connect remotely to servers and workstations via port 3389. However, misconfigured remote access often creates a security risk. There has been a massive increase in RDP attack attempts in 2020. Windows computers with unpatched RDP can be used by malicious actors to move within the network and deposit malicious code (e.g., ransomware). Devices getting infected with malware is a common occurrence when users work outside the corporate network. Since IT departments cannot push software updates through to the devices, security teams need to monitor for potential malware infections. Event logs can detect potentially malicious activity when used correctly.
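To illustrate what "using event logs correctly" can look like for RDP, here is a minimal sketch that scans an exported Windows Security log for failed remote logons and flags sources that look like brute-force attempts; the CSV export, column names and threshold are assumptions, while Event ID 4625 (failed logon) and logon type 10 (RemoteInteractive/RDP) are standard Windows values.

# Minimal sketch: flag possible RDP brute-force activity from an exported
# Windows Security log. A CSV export with EventID, LogonType and IpAddress
# columns is assumed; adjust field names to match your own export format.
import csv
from collections import Counter

THRESHOLD = 20  # failed attempts from one source before raising an alert

def suspicious_rdp_sources(path):
    failures = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # 4625 = failed logon, logon type 10 = RemoteInteractive (RDP)
            if row["EventID"] == "4625" and row["LogonType"] == "10":
                failures[row["IpAddress"]] += 1
    return {ip: n for ip, n in failures.items() if n >= THRESHOLD}

for ip, count in suspicious_rdp_sources("security_events.csv").items():
    print(f"possible RDP brute force from {ip}: {count} failed logons")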


Cryptophone Service Crackdown: Feds Indict Sky Global CEO

Sky Global's CEO has disputed those allegations and said he has received no direct notice of any charges being filed against him or any extradition request. "Sky Global’s technology works for the good of all. It was not created to prevent the police from monitoring criminal organizations; it exists to prevent anyone from monitoring and spying on the global community," Eap says in a statement released Sunday and posted to the company's website. ... "The unfounded allegations of involvement in criminal activity by me and our company are entirely false. I do not condone illegal activity in any way, shape or form, and nor does our company." Eap has also disputed claims by police that they cracked Sky Global's encryption. Previously, Sky Global had offered a $5 million reward to anyone able to demonstrate that they had cracked the encryption. Following a two-year investigation into Sky Global and its customers, last week, police in Belgium, France and the Netherlands launched numerous house searches, leading to hundreds of arrests of alleged users - including three attorneys in Antwerp, Belgium - as well as the seizure of thousands of kilograms of cocaine and methamphetamine, hundreds of firearms, millions of euros in cash as well as diamonds, jewelry, luxury vehicles and police uniforms, officials say.


Optimize your CloudOps: 8 tricks CSPs don't want you to know

Leveraging security managers that span all your traditional systems and public clouds is three times more effective than following a cloud-native approach. Similar to tip No. 1 above, cloud-native security systems operate best on their native cloud. Eventually you'll have silos of security systems, each solving tactical security problems for their native clouds. What you need is an overarching security ops platform that can manage security from cloud to cloud as well as for traditional systems, and perhaps with emerging technologies such as edge computing. Again, this is about finding something "cross-cloud" that exists today, and to do that you'll have to look for third-party providers. If you don't choose cross-cloud security now, the move from cloud-native to cross-cloud security will happen when your security silos become too complex to maintain and the first breach occurs. At that point, the transformation from cloud-native to cross-cloud security is difficult and costly. While this trick causes some debate from time to time, most experts agree: Abstracting public clouds for performance monitoring is a much better approach than just monitoring a single cloud using its cloud-native system.


AI One Year Later: How the Pandemic Impacted the Future of Technology

Those changing consumer behaviors created an abrupt reality for data science teams: predictive AI and machine learning (ML) models and the data they are derived from were almost instantly outdated, and in many cases reduced to irrelevance. In the past, these models were based on historical data from several years of behavioral patterns. But in a world of tightened spending, limited purchasing options, changing demand patterns, and restricted engagement with customers, that historical data no longer applied. To combat this problem -- at a time when companies could not afford inaccurate predictions or lost revenue -- AI teams turned to such solutions as real-time, ever-changing forecasting. By constantly updating and tuning their predictive models to include incoming data from the new pandemic-driven patterns, organizations were able to reduce data drift and more effectively chart their paths through the crisis and recovery period. With their hand forced, companies needed to make difficult choices during the spring of 2020. Do they put their projects and initiatives on pause and wait for the pandemic to subside, or push forward in applying AI as a competitive differentiator during these challenging times?
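As a minimal sketch of that rolling retraining idea, the example below compares recent feature values against the window the model was trained on and flags when a refresh is warranted; the two-sample KS test is just one possible drift signal, and the distributions are synthetic stand-ins for real demand data.

# Minimal drift check: retrain when recent data no longer resembles the
# window the model was trained on. Data here is synthetic for illustration.
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(training_window, recent_window, alpha=0.01):
    """Flag drift when the recent sample differs significantly from training data."""
    _, p_value = ks_2samp(training_window, recent_window)
    return p_value < alpha

rng = np.random.default_rng(0)
pre_pandemic = rng.normal(loc=100, scale=10, size=5_000)   # historical demand pattern
during_lockdown = rng.normal(loc=60, scale=25, size=500)   # shifted behavior

if needs_retraining(pre_pandemic, during_lockdown):
    print("drift detected -- refresh the model with recent data")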


What is Agile leadership? How this flexible management style is changing how teams work

As Agile development took hold in IT departments, so tech chiefs started thinking about how the approach could be used – not just to create software products – but to lead teams and projects more generally. As this happened, CIOs started talking about the importance of Agile leadership. Over the past decade, the use of Agile as a technique for leading and completing projects has moved beyond the IT department and across all lines of business. The increased level of collaboration between tech organisations and other functions, particularly marketing and digital, has helped to feed the spread of Agile management. ... Although Agile leadership leans heavily on the principles and techniques of Agile software development, such as iteration, standups and retrospectives, it's probably fair to say that it's a management style that involves a general stance rather than a hard-and-fast set of rules. Mark Evans, managing director of marketing and digital at Direct Line, says the key to effective Agile management is what's known as servant leadership, a leadership philosophy in which the main goal of the leader is to serve.



Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad