
Daily Tech Digest - January 24, 2026


Quote for the day:

"Definiteness of purpose is the starting point of all achievement." -- W. Clement Stone



When a new chief digital officer arrives, what does that mean for the CIO?

One reason the CDO can unsettle CIOs is that the title has never had a consistent meaning. Isaac Sacolick, president and founder of StarCIO, said organizations typically create the role for one of two reasons. "Some organizations split off a CDO role because the CIO is overly focused on infrastructure and operations, and the business's customer and employee experiences, AI and data initiatives, and other innovations aren't meeting expectations," Sacolick said. "In other organizations, the CDO is a C-level title for the head of product management and UX/design functions, and reports to the CIO." Those two models lead to very different outcomes. In the first, the CDO is positioned as a corrective measure; in the second, the role is an extension of the CIO's broader operating model. Without clarity on which model is being pursued, confusion tends to follow. ... Across the experts, there was strong agreement on one point: The CIO remains central to the enterprise digital operating model, even as new roles emerge. "CIOs need to own the digital operating model and evolve it for the AI era," Sacolick said, noting that this increasingly involves "product-centric, agile, multi-disciplinary team organizational models." Ratcliffe echoed that sentiment, emphasizing accountability and trust. "The CIO should be the single point of ownership with the deep expertise feeding into it so there is consistency, business acumen and trust built within the technology function," he said.


Responsible AI moves from principle to practice, but data and regulatory gaps persist: Nasscom

The data shows a strong correlation between AI maturity and responsible practices. Nearly 60% of companies that say they are confident about scaling AI responsibly already have mature RAI frameworks in place. Large enterprises are leading this transition, with 46% reporting mature practices. Startups and SMEs trail behind at 16% and 20% respectively, but Nasscom sees this as ecosystem-wide momentum rather than a gap, given the growing willingness among smaller firms to learn, comply, and invest. ... Workforce enablement has become a central pillar of this transition. Nearly nine out of ten organisations surveyed are investing in sensitisation and training around Responsible AI. Companies report the highest confidence in meeting data protection obligations—reflecting relatively mature privacy frameworks—but monitoring-related compliance continues to be a concern. Accountability for AI governance still sits largely at the top. ... As AI systems become more autonomous, Responsible AI is increasingly seen as the deciding factor for whether organisations can scale with confidence. Nearly half of mature organisations believe their current frameworks are prepared to handle emerging technologies such as agentic AI. At the same time, industry experts caution that most existing frameworks will need substantial updates to address new categories of risk introduced by more autonomous systems. The report concludes that sustained investment in skills, governance mechanisms, high-quality data, and continuous monitoring will be essential.


AI-induced cultural stagnation is no longer speculation – it’s already happening

Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt. ... For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation. Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions. ... The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the repeated text-to-image-to-text conversions. This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration.



Europe votes to tackle deep dependence on US tech in sovereignty drive

The depth of European reliance on foreign technology providers varies across sectors but remains substantial throughout the stack. In cloud infrastructure alone, Amazon, Microsoft, and Google command 70% of the European market, while local providers including SAP, Deutsche Telekom, and OVHcloud collectively hold just 15%. ... “Recent geopolitical tensions show that the issue of Europe’s digital sovereignty is of the utmost importance,” MichaƂ Kobosko, the Renew Europe MEP who negotiated the report text, said in a statement. “If we do not act now to reduce Europe’s technological dependence on foreign actors, we run the risk of becoming a digital colony.” ... “Due to geopolitical tensions, the driver has shifted to reducing foreign digital dependency across the entire technology stack. European CIOs are now tasked with redesigning their approach to semiconductors, cloud, software, and AI, upending two decades of established strategy. It’s not going to be easy, it’s not going to be cheap, and it’s going to span multiple generations of CIOs.” When asked whether European enterprises will see viable sovereign alternatives across core technology areas, Henein said: “The answer is yes, but the time horizon is potentially more than a decade. Europe has been supporting US technology providers through licensing agreements for the better part of the last two decades.” ... A key question is whether the report’s proposed preferential procurement policies can actually change market realities, given the ...


One-time SMS links that never expire can expose personal data for years

One of the most significant findings involved how long these links remained active. All 701 confirmed URLs still worked when the researchers accessed them, often long after the original message was sent. More than half of the exposed links were between one and two years old. About 46% were older than two years. Some dated back to 2019. Public SMS gateways rarely retain messages for that long, which suggests that the actual lifetime of many links may extend even further. The risk starts as soon as a private link is exposed, but it grows with time. The longer a link stays active, the more chances there are for abuse through logs, forwarding, compromised devices, message interception, phone number recycling, or third-party access. ... In many services, the link carried a token passed to backend APIs. Some pages rendered data server side, while others fetched information after load. Only five services placed personal data directly inside the URL itself, though access results were similar once the link was opened. This design assumes the link remains private. According to Danish, product pressure plays a central role in keeping this pattern widespread. ... In one case, an order tracking page displayed an address, while API responses included phone numbers, geolocation data, and driver details. In another, a loan service returned bank routing numbers and Social Security numbers that were only visible in network logs. This data became reachable as soon as the link was opened, even before the page finished loading. 
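The mitigation the findings point toward is expiry. As a minimal sketch of the alternative design (not any studied service's actual implementation; the key handling, fields, and TTL below are illustrative), a service can embed an expiry timestamp in an HMAC-signed token so that old links simply stop resolving:

```python
import base64, hashlib, hmac, json, time

SECRET = b"server-side-secret"  # hypothetical key; keep it in a KMS in practice

def make_link_token(order_id: str, ttl_seconds: int = 7 * 24 * 3600) -> str:
    """Issue a link token that carries its own expiry and an HMAC signature."""
    payload = json.dumps({"oid": order_id, "exp": int(time.time()) + ttl_seconds})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{payload}|{sig}".encode()).decode()

def verify_link_token(token: str) -> dict | None:
    """Return the payload only if the signature is valid and the link is fresh."""
    try:
        payload, sig = base64.urlsafe_b64decode(token).decode().rsplit("|", 1)
    except Exception:
        return None
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered
    data = json.loads(payload)
    if data["exp"] < time.time():
        return None  # expired: the link stops working instead of living for years
    return data

token = make_link_token("order-12345")
print(verify_link_token(token))  # {'oid': 'order-12345', 'exp': ...}
```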


How enterprise architecture and start-up thinking drive strategic success

Strategy is now judged less by the quality of vision decks and more by how quickly enterprises can test, learn and scale what works and is valuable. To keep pace, enterprises increasingly combine the discipline of enterprise architecture with the speed and adaptability associated with a start-up mindset. ... Modern enterprise architecture is less about cataloging systems and more about shaping how an enterprise senses opportunities, mobilizes resources and transforms at pace. In a high-performing enterprise, it acts as a bridge between strategy and execution in three concrete ways: alignment and clarity; transparency and risk management; and decision support and adaptive governance. ... Start-ups and scale-ups operate under uncertainty, but they thrive by learning in short cycles, minimizing waste and scaling only what demonstrates traction. When large enterprises infuse enterprise architecture with similar principles, the function becomes a multiplier for speed rather than a constraint. ... Cross-functional innovation and flexible governance complete the picture. In many enterprises, architects now embed directly in domain or platform teams, joining strategic backlog refinement, incident reviews and design sessions as peers. In a large healthcare network, for instance, enterprise architecture practitioners joined clinical, operations and analytics teams to co-design a data platform that could support both operational reporting and AI-driven decision support.


From Conflict To Collaboration: How Tension Can Strengthen Your Team

Letting tensions simmer is one of the most common leadership mistakes. The longer a disagreement sits in the corner, the more toxic it becomes. ... Teams function better when they normalize honest conversation before things go sideways. A simple practice—opening meetings with "wins and worries"—creates a habit of surfacing concerns early. Netflix cofounder Reed Hastings echoes this principle: "Only say about someone what you will say to their face." It’s a powerful expectation. Candor reduces gossip, eliminates guesswork and gives leaders clarity long before emotions get out of hand. ... When conflict arises, people don’t immediately need solutions. What they need is to feel heard. It’s vital to fully understand their concerns so there is no ambiguity. Repeat your understanding of their position before giving your input. It’s remarkable how much progress can be made when people feel genuinely heard. ... Compromise has an unfair reputation in business culture, as if giving an inch signals defeat. In practice, it’s a recognition that multiple perspectives may hold merit. Good leaders invite both sides to walk through their rival viewpoints together. When people better understand the context behind each position, they’re far more willing to find common ground that moves the team forward. ... Many conflicts resurface not because the solution was wrong, but because leaders assumed the first conversation fixed everything. 


Six tips to gain control over your cloud spending

The first step any organization should take before shifting a workload to the cloud is performing proper due diligence on ROI. It isn’t always the case that moving workloads to the cloud will translate into financial savings. Many variables should be considered when calculating ROI, including current infrastructure, licensing and hiring. ... A formal cloud governance framework establishes rules, policies, and processes that formalize how cloud resources will be accessed, used, and retired. Accurately matching cloud resources to workload demands improves resource utilization and minimizes waste. ... FinOps, short for financial operations, is a management discipline that involves collaboration between finance, operations and development teams to manage cloud spending. By implementing tools and processes for cost tracking, budgeting, and forecasting, businesses can gain insights into their cloud expenses and identify areas for optimization. ... Providers offer a variety of discounts that can significantly reduce cloud costs. For example, reserved instance pricing models offer discounts to customers who reserve cloud resources over a fixed period. Some providers offer tiered pricing models in which the cost per unit decreases as you consume more resources. ... You may find that moving some workloads to the cloud offers no significant performance advantages. Repatriating some applications, data and workloads back to on-premises infrastructure can often improve performance while reducing cloud spending.
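To make the reserved-instance trade-off concrete, here is a tiny break-even sketch. All rates are hypothetical; real pricing varies by provider, region, instance family, and commitment term:

```python
# Hypothetical rates for illustration only.
on_demand_hourly = 0.20   # $/hour, pay-as-you-go
reserved_hourly = 0.13    # $/hour effective rate with a 1-year commitment

hours_per_month = 730
utilization = 0.60        # fraction of the month the instance actually runs

on_demand_cost = on_demand_hourly * hours_per_month * utilization
reserved_cost = reserved_hourly * hours_per_month  # committed: billed whether used or not

print(f"On-demand: ${on_demand_cost:,.2f}/mo, Reserved: ${reserved_cost:,.2f}/mo")

# Break-even utilization: reserve only workloads that run above this fraction of the time.
break_even = reserved_hourly / on_demand_hourly
print(f"Reserve when sustained utilization exceeds {break_even:.0%}")
```

The rule of thumb that falls out: commitments only pay off for workloads whose sustained utilization exceeds the ratio of the reserved rate to the on-demand rate.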


These 4 big technology bets will reshape the global economy in 2026

Disruptive technologies will have a material impact on real GDP growth. ARK suggested that capital investment alone, catalyzed by disruptive innovation platforms, could add 1.9% to annualized real GDP growth this decade. Each innovation platform (AI, public blockchains, robotics, energy storage, and multiomics) should provide a structural boost to global growth. ... According to ARK research, hyperscalers are expected to spend more than $500 billion on capital expenditures (Capex) in 2026, nearly four times the $135 billion spent in 2021, the year before the launch of ChatGPT in 2022. ... ARK forecasted that AI agents could facilitate more than $8 trillion in online consumption by 2030. ARK noted that as consumers delegate more decisions to intelligent systems, AI agents should capture an increasing share of digital transactions, from 2% of online spend in 2025 to around 25% by 2030 ... AI agents are becoming more productive. ARK found that advances in reasoning capability, tool use, and extended context are driving an exponential increase in the capability of AI agents. The duration of tasks these agents can complete reliably increased fivefold, from six minutes to 31 minutes, in 2025. ... ARK suggested robots are a growing part of the labor force and took a historical look at productivity and labor hours. As productivity increased, each hour of labor became more valuable, enabling increased output with fewer hours, as living standards continued to rise.
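As a quick back-of-envelope check on that agent-share trajectory, using only the two figures cited above:

```python
# AI agents going from ~2% of online spend in 2025 to ~25% by 2030
# means the share multiplies ~12.5x over 5 years.
start_share, end_share, years = 0.02, 0.25, 5
implied_growth = (end_share / start_share) ** (1 / years) - 1
print(f"Implied annual growth in agent share: {implied_growth:.0%}")  # ~66% per year
```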


Half of agentic AI projects are still stuck at the pilot stage

The main barriers to full implementation, respondents said, are concerns with security, privacy, or compliance, cited by 52%, followed by technical challenges to managing agents at scale, at 51%. “Organizations are not slowing adoption because they question the value of AI, but because scaling autonomous systems safely requires confidence that those systems will behave reliably and as intended in real-world conditions,” said Alois Reitbauer, chief technology strategist at Dynatrace. Seven in ten agentic AI-powered decisions are still verified by humans, and 87% of organizations are actively building or deploying agents that require human supervision. ... A recurring pain point for enterprises tinkering with agentic AI tools lies in observability, according to Dynatrace. Observability of these autonomous systems is needed across every stage of the life cycle, from development and implementation through to operationalization. Observability is most used in implementation, at 69%, followed by operationalization at 57% and development at 54%. “Observability is a vital component of a successful agentic AI strategy. As organizations push toward greater autonomy, they need real-time visibility into how AI agents behave, interact, and make decisions,” Reitbauer said. “Observability not only helps teams understand performance and outcomes, but it provides the transparency and confidence required to scale agentic AI responsibly and with appropriate oversight.”

Daily Tech Digest - January 15, 2026


Quote for the day:

"You have to have your heart in the business and the business in your heart." -- An Wang


AI agents can talk — orchestration is what makes them work together

“Agent-to-agent communications is emerging as a really big deal,” G2’s chief innovation officer Tim Sanders told VentureBeat. “Because if you don't orchestrate it, you get misunderstandings, like people speaking foreign languages to each other. Those misunderstandings reduce the quality of actions and raise the specter of hallucinations, which could be security incidents or data leakage.” ... In another critical evolution in the agentic era, human evaluators will become designers, moving from human-in-the-loop to human-on-the-loop, according to Sanders. That is: They will begin designing agents to automate workflows. Agent builder platforms continue to innovate their no-code solutions, Sanders said, meaning nearly anyone can now stand up an agent using natural language. “This will democratize agentic AI, and the super skill will be the ability to express a goal, provide context and envision pitfalls, very similar to a good people manager today.” ... Organizations should begin “expeditious programs” to infuse agents across workflows, especially with highly repetitive work that poses bottlenecks. Likely at first, there will be a strong human-in-the-loop element to ensure quality and promote change management. “Serving as an evaluator will strengthen the understanding of how these systems work,” Sanders said, “and eventually enable all of us to operate upstream in agentic workflows instead of downstream.”


Integrating AI-Enhanced Microservices in SAFe 5.0 Framework

AI-driven microservices can be a game-changer for Lean Portfolio Management within SAFe. By optimizing decision analytics and enhancing value stream performance, AI simplifies, rather than complicates. I know what you’re thinking: AI tools can add complexity. One client put this to the test, and we found AI helped reduce the noise. It sliced through the data smog to identify hidden value streams and automate mundane tasks like financial forecasting and risk management. ... Integrating decentralized AI models into SAFe’s ARTs can significantly enhance their autonomy. During a high-stakes project, we shifted from a centralized to a decentralized model, which allowed ARTs to self-optimize and adapt to shifting priorities seamlessly. It was like giving ARTs a brain of their own. Decentralized AI models reduce the bottlenecks you'd typically encounter in centralized systems. Think of the ARTs as small startups within the larger enterprise ecosystem, each capable of making swift, informed decisions. ... This isn’t just a tech enthusiast's dream—it's an emerging reality. The maturity of AI technologies spells a future where enterprises aren’t just keeping up; they’re setting the pace. So, if there’s a single, actionable insight to glean from my journey, it’s this: enterprises need to actively pursue cross-industry collaborations, invest in AI-powered microservices, and hone their Agile professionals’ skill sets.


Incorporating Geopolitical Risk Into Your IT Strategy

IT organizations know how to plan for unexpected outages, but even the most rigorously designed strategy is vulnerable to the shifting winds of geopolitics. CIOs and technology leaders need to know how their organizations will respond to geopolitical disruptions, and scenario planning needs to be a priority. ... "The IT department can treat geopolitical disruption as an expected operational variable rather than an unforeseen catastrophe. Good and tested enterprise risk management frameworks, investment in government affairs partnerships and ongoing board engagement should start to manage and prepare for this," Dixon said. CIOs need to do scenario modeling around the risks facing their enterprise, and evaluate how IT is teaming with business units, security teams and the CISO on a cohesive tech strategy that builds security, including artificial intelligence security, in from the ground up, said Sean Joyce ... "You're as strong as your weakest link," Joyce said. "As geopolitical risk becomes more prominent, you're going to see tools like cyber being leveraged by countries, particularly those that don't have stronger military or other capabilities. For some, it may be the only tool they can leverage." Physical infrastructure, geography and power supplies are also now areas of risk CIOs need to consider, and infrastructure strategy must align with sustainability, energy realities and geopolitical stability. 


Six Architecture Challenges for Startups

The risk is not that the first version is imperfect; that is inevitable. The risk is that the team keeps layering new functionality on top of an accidental architecture. At some point, the cost of change becomes so high that every small modification feels dangerous. The architectural challenge is to intentionally decide where to accept debt and where to invest in structure. Startups need a minimal set of principles – for example, clear domain boundaries, basic API hygiene, and a simple deployment model – that allow speed without locking the product into a dead end. ... If the product team is still validating pricing models, redefining the customer journey, or experimenting with different verticals, any rigid decomposition can turn into friction. Yet avoiding boundaries altogether leads to a “big ball of mud” that is equally hard to evolve. A practical approach is to use provisional boundaries based on current value streams – onboarding, transaction processing, analytics, etc. – and treat them as hypotheses. The challenge is not to find the perfect structure from day one, but to keep those boundaries explicit and adjustable as the business model evolves. ... Startups must make conscious decisions about where they are comfortable being tightly coupled to a provider and where they need portability. That requires viewing cloud services through a business lens: What is strategic IP, what is replaceable, and what is pure commodity? Aligning these categories with architectural choices is a non-trivial design challenge, not just a procurement decision. 


Platform-as-a-Product: Declarative Infrastructure for Developer Velocity

Without centralized guardrails, teams often compensate by over-allocating resources "to be safe", leading to inconsistent environments and unnecessary cloud spend that is only discovered after deployment. ... What is missing is a developer-friendly abstraction that brings these related concerns together. Developers need a way to express intent (not only what infrastructure is required, but also how the application should be built, deployed, configured across environments, secured, and sized) without having to implement the mechanics of each underlying system. From a platform engineering perspective, this abstraction represents the core of an internal developer platform and can be implemented as a lightweight Python-based platform framework. ... The platform comprises several interconnected components. GitLab pipelines coordinate everything, pulling code from repositories, building and unit testing applications (with tests written by developers), checking security, creating cloud infrastructure with Terraform/IaC, and deploying to Kubernetes clusters with Puppet configuration management. The configuration YAML file controls all of this, telling each component what to do. The architecture clearly separates concerns: the CI pipeline handles code building, testing, and vulnerability scanning, while the CD pipeline handles deployment: creating cloud resources, updating Kubernetes, and configuring environments.
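The article does not publish the platform's actual schema, but a developer-facing manifest of this kind, plus the start of a Python framework that consumes it, might look like the following sketch. Every field and class name here is invented for illustration (requires PyYAML):

```python
import yaml
from dataclasses import dataclass

# Invented example of a declarative manifest expressing developer intent.
MANIFEST = """
app: payments-api
build:
  test_command: pytest
deploy:
  environments: [dev, staging, prod]
  replicas: {dev: 1, staging: 2, prod: 4}
  cpu: 500m
  memory: 512Mi
security:
  scan: true
"""

@dataclass
class DeploySpec:
    environments: list
    replicas: dict
    cpu: str
    memory: str

def load_manifest(text: str) -> DeploySpec:
    """Parse developer intent; the platform, not the developer, would turn
    this spec into Terraform resources and Kubernetes objects."""
    deploy = yaml.safe_load(text)["deploy"]
    return DeploySpec(deploy["environments"], deploy["replicas"],
                      deploy["cpu"], deploy["memory"])

spec = load_manifest(MANIFEST)
print(spec.replicas["prod"])  # 4
```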


(Re)introducing Adaptive Business Continuity

Adaptive BC is designed to provide a framework that delivers better outcomes when organizations deal with losses. The result may be a reduction in documentation (something I greatly favor) but that is not a stated goal. ... My experience over the years has led me to conclude that trying to define priorities for the resumption of services is wasted effort. Many activities can take place in parallel, and priorities will change when disasters occur. A perfect example is the governmental lockdowns and health authority mandates that followed the emergence of COVID. The result is that demand for products and services changed drastically, upending previous priorities. Priorities may be defined following adaptive principles, but it is not at all a stated component of the Adaptive framework. ... For a number of reasons, I would like to see the word “plan” used a lot less within our profession. Seeing the word “strategy” in its place would be a step in the right direction. Strategy improvement is not, however, a key outcome of Adaptive BC efforts. There is some benefit to having clearly defined recovery strategies, but strategies only provide benefit to competent and empowered teams armed with the resources they need to carry out the mission. For this reason, I always emphasize the importance of focusing efforts on capabilities and consider plans and strategies as little more than supporting tools for any business continuity program. The improvement of strategies and/or plans is simply not an expected outcome of Adaptive BC work.


Exactly What To Automate With AI In 2026 For Faster Business Growth

Most founders automate the wrong things. They start with the flashy stuff, the complicated tools and fancy dashboards, while ignoring the repetitive tasks quietly draining their hours. But faster, cleaner growth comes from removing friction from the activities that actually grow your business. ... You shouldn't embark on a day's worth of admin tasks every time a new client says yes. It will only slow you down. Make it easy for them to pay, get a receipt, complete an onboarding form, and submit the required information. On your end, have the Google Drive folders, follow-up emails, and team briefings set up without you lifting a finger. Question everything you currently do manually. There is no reason it couldn't be an AI agent handling the sequence. All the tools you pay for already have integrations with each other; you're just not using them. The goal is that you could sign client after client because onboarding takes minutes, not hours. ... AI-generated content is awful when you use it wrong. But that doesn't mean you shouldn't involve AI in your content production process. Content still matters in marketing, whether long-form articles, videos, or social media visuals. You need to be part of the conversation, but only with relevant, authentic material. You cannot outproduce everyone manually, so use automations and retain your human genius for the finishing touches. ... The more your life admin runs on autopilot, the more you free up time and energy for your business.


What is AI fuzzing? And what tools, threats and challenges generative AI brings

The way traditional fuzzing works is you generate a lot of different inputs to an application in an attempt to crash it. Since every application accepts inputs in different ways, that requires a lot of manual setup. Security testers would then run these tests against their companies’ software and systems to see where they might fail. ... Today, generative artificial intelligence has the potential to automate this previously manual process, coming up with more intelligent tests, and allowing more companies to do more testing of their systems. ... But there’s a third angle involved here. What if, instead of trying to break traditional software, the target was an AI-powered system? This creates unique challenges because AI chatbots are not predictable and can respond differently to the same input at different times. ... AI fuzzing can also help speed up the discovery of vulnerabilities, Roy says. “Traditionally, testing was always a function of how many days and weeks you had to test the system, and how many testers you could throw at the testing,” he says. “With AI, we can expand the scale of the testing.” ... Another use of AI in fuzzing is that it takes more than a set of test cases to fully test an application — you also need a mechanism, a harness, to feed the test cases into the app, and in all the nooks and crannies of the application. “If the fuzzing harness does not have good coverage, then you may not uncover vulnerabilities through your fuzzing,” says Dane Sherrets, staff innovations architect for emerging technologies at HackerOne.
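For readers unfamiliar with the mechanics, here is a toy harness: it mutates a seed input and feeds the variants to a hypothetical parser, logging anything that fails outside the expected validation errors. Real fuzzers, and the AI-assisted approaches discussed above, are far more sophisticated about input generation and coverage; this only illustrates the loop:

```python
import random, string

def parse_record(line: str) -> dict:
    """Hypothetical target under test: parses 'name:age' records."""
    name, age = line.split(":")
    return {"name": name, "age": int(age)}

def mutate(seed: str) -> str:
    """Crude mutation: insert, delete, or replace random characters."""
    chars = list(seed)
    for _ in range(random.randint(1, 3)):
        op = random.choice(["insert", "delete", "replace"])
        pos = random.randrange(len(chars) + 1 if op == "insert" else max(len(chars), 1))
        if op == "insert":
            chars.insert(pos, random.choice(string.printable))
        elif op == "delete" and chars:
            del chars[pos]
        elif chars:
            chars[pos] = random.choice(string.printable)
    return "".join(chars)

crashes = []
for _ in range(10_000):
    test_input = mutate("alice:30")
    try:
        parse_record(test_input)
    except ValueError:
        pass  # expected validation errors are not crashes
    except Exception as exc:  # anything else is a finding worth triaging
        crashes.append((test_input, repr(exc)))

print(f"{len(crashes)} unexpected failures")
```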


CISOs flag gaps in third-party risk management

CISOs rank third-party cyber risk among their highest-impact threats. Vendor relationships touch nearly every core business function, from cloud infrastructure and software development to data processing and AI services. Each added dependency expands the attack surface and increases the number of organizations involved in protecting sensitive systems and data. ... Only a small portion of organizations report visibility across third-, fourth-, and nth-party relationships. Most operate with partial insight limited to direct vendors or a narrow segment of the extended supply chain. CISOs say limited visibility complicates incident response, risk prioritization, and compliance planning. When a breach emerges several layers removed from a known vendor, security teams may struggle to understand exposure, timelines, and downstream impact. ... CISOs report rising regulatory scrutiny tied to third-party cyber risk. Regulatory frameworks place greater expectations on organizations to demonstrate oversight across vendor ecosystems, including indirect relationships. Only a minority of organizations feel ready to meet upcoming requirements without major changes. Most report progress underway, with further work needed to align processes, tooling, and internal coordination. Third-party risk management involves legal, procurement, compliance, and executive leadership alongside security teams. ... At the same time, AI adoption accelerates within vendor risk management itself. 


Anti-fragility – what is it and why should it be the goal for your organisation?

That ability to thrive in the face of disruption must become the basis for improved resilience. Modern organisations shouldn’t strive for survival, but for continual improvement. In the cyber sphere, that is crucial. Threat actors are constantly changing tack, targeting new CVEs, and executing increasingly complicated supply chain attacks. Resilience must therefore move in tandem as an ongoing process of learning and adapting. That is the crux of anti-fragility. It defines systems that thrive and improve from stress, volatility, disorder and shocks, rather than just resisting them. If a security model is only designed to recover, it remains just as vulnerable as before. But an anti-fragile approach actively benefits from each attack, identifying weaknesses, addressing them, and adapting as needed. ... Increasingly, organisations are recognising the value in anti-fragility as a strategy and more will adopt it next year. However, getting there means going beyond regulatory compliance. Compliance lays the foundations from which successful cybersecurity can be built, yet many currently see it as the finished structure. There are several problems with that. Security legislation frequently lags behind the threat landscape, and so the gap between a new threat emerging and a new law coming in to address it can stretch over the course of years. Organisations must therefore understand that compliance doesn’t equal protection. 

Daily Tech Digest - April 25, 2025


Quote for the day:

"Whatever you can do, or dream you can, begin it. Boldness has genius, power and magic in it." -- Johann Wolfgang von Goethe


Revolutionizing Application Security: The Plea for Unified Platforms

“Shift left” is a practice that focuses on addressing security risks earlier in the development cycle, before deployment. While effective in theory, this approach has proven problematic in practice as developers and security teams have conflicting priorities. ... Cloud native applications are dynamic: constantly deployed, updated and scaled, so robust real-time protection measures are absolutely necessary. Every time an application is updated or deployed, new code, configurations or dependencies appear, all of which can introduce new vulnerabilities. The problem is that it is difficult to implement real-time cloud security with a traditional, compartmentalized approach. Organizations need real-time security measures that provide continuous monitoring across the entire infrastructure, detect threats as they emerge and automatically respond to them. As Tager explained, implementing real-time prevention is necessary “to stay ahead of the pace of attackers.” ... Cloud native applications tend to rely heavily on open source libraries and third-party components. In 2021, Log4j’s Log4Shell vulnerability demonstrated how a single compromised component could affect millions of devices worldwide, exposing countless enterprises to risk. Effective application security now extends far beyond the traditional scope of code scanning and must reflect the modern engineering environment.


AI-Powered Polymorphic Phishing Is Changing the Threat Landscape

Polymorphic phishing is an advanced form of phishing campaign that randomizes the components of emails, such as their content, subject lines, and senders’ display names, to create several almost identical emails that only differ by a minor detail. In combination with AI, polymorphic phishing emails have become highly sophisticated, creating more personalized and evasive messages that result in higher attack success rates. ... Traditional detection systems group phishing emails together to enhance their detection efficacy based on commonalities in phishing emails, such as payloads or senders’ domain names. The use of AI by cybercriminals has allowed them to conduct polymorphic phishing campaigns with subtle but deceptive variations that can evade security measures like blocklists, static signatures, secure email gateways (SEGs), and native security tools. For example, cybercriminals modify the subject line by adding extra characters and symbols, or they can alter the length and pattern of the text. ... The standard way of grouping individual attacks into campaigns to improve detection efficacy will become irrelevant by 2027. Organizations need to find alternative measures to detect polymorphic phishing campaigns that don’t rely on blocklists and that can identify the most advanced attacks.
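A small sketch of why exact signatures break down under polymorphism: the invented variants below all hash to different signatures, so blocklist-style grouping sees four separate "campaigns", yet simple string similarity still ties them to one:

```python
import hashlib
from difflib import SequenceMatcher

# Polymorphic variants of one campaign: same lure, minor randomized tweaks.
subjects = [
    "Your invoice #4821 is overdue",
    "Your invoice #4821 is overdue .",
    "Your invoice #4821 is overdue!!",
    "Your lnvoice #4821 is overdue",   # homoglyph-style character swap
]

# Exact signatures: every variant hashes differently.
print({hashlib.sha256(s.encode()).hexdigest()[:8] for s in subjects})

# Similarity-based grouping still clusters the variants together.
base = subjects[0]
for s in subjects[1:]:
    ratio = SequenceMatcher(None, base, s).ratio()
    print(f"{ratio:.2f}  {s}")  # all well above a 0.9-style clustering threshold
```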


Does AI Deserve Worker Rights?

Chalmers et al. declare that there are three things that AI-adopting institutions can do to prepare for the coming consciousness of AI: “They can (1) acknowledge that AI welfare is an important and difficult issue (and ensure that language model outputs do the same), (2) start assessing AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern.” What would “an appropriate level of moral concern” actually look like? According to Kyle Fish, Anthropic’s AI welfare researcher, it could take the form of allowing an AI model to stop a conversation with a human if the conversation turned abusive. “If a user is persistently requesting harmful content despite the model’s refusals and attempts at redirection, could we allow the model simply to end that interaction?” Fish told the New York Times in an interview. What exactly would model welfare entail? The Times cites a comment made in a podcast last week by podcaster Dwarkesh Patel, who compared model welfare to animal welfare, stating it was important to make sure we don’t reach “the digital equivalent of factory farming” with AI. Considering Nvidia CEO Jensen Huang’s desire to create giant “AI factories” filled with millions of his company’s GPUs cranking through GenAI and agentic AI workflows, perhaps the factory analogy is apropos.


Cybercriminals switch up their top initial access vectors of choice

“Organizations must leverage a risk-based approach and prioritize vulnerability scanning and patching for internet-facing systems,” wrote Saeed Abbasi, threat research manager at cloud security firm Qualys, in a blog post. “The data clearly shows that attackers follow the path of least resistance, targeting vulnerable edge devices that provide direct access to internal networks.” Greg Linares, principal threat intelligence analyst at managed detection and response vendor Huntress, said, “We’re seeing a distinct shift in how modern attackers breach enterprise environments, and one of the most consistent trends right now is the exploitation of edge devices.” Edge devices, ranging from firewalls and VPN appliances to load balancers and IoT gateways, serve as the gateway between internal networks and the broader internet. “Because they operate at this critical boundary, they often hold elevated privileges and have broad visibility into internal systems,” Linares noted, adding that edge devices are often poorly maintained and not integrated into standard patching cycles. Linares explained: “Many edge devices come with default credentials, exposed management ports, secret superuser accounts, or weakly configured services that still rely on legacy protocols — these are all conditions that invite intrusion.”


5 tips for transforming company data into new revenue streams

Data monetization can be risky, particularly for organizations that aren’t accustomed to handling financial transactions. There’s an increased threat of security breaches as other parties become aware that you’re in possession of valuable information, ISG’s Rudy says. Another risk is unintentionally using data you don’t have a right to use or discovering that the data you want to monetize is of poor quality or doesn’t integrate across data sets. Ultimately, the biggest risk is that no one wants to buy what you’re selling. Strong security is essential, Agility Writer’s Yong says. “If you’re not careful, you could end up facing big fines for mishandling data or not getting the right consent from users,” he cautions. If a data breach occurs, it can deeply damage an enterprise’s reputation. “Keeping your data safe and being transparent with users about how you use their info can go a long way in avoiding these costly mistakes.” ... “Data-as-a-service, where companies compile and package valuable datasets, is the base model for monetizing data,” he notes. However, insights-as-a-service, where customers are provided with prescriptive/predictive modeling capabilities, can demand a higher valuation. Another consideration is offering an insights platform-as-a-service, where subscribers can securely integrate their data into the provider’s insights platform.


Are AI Startups Faking It Till They Make It?

"A lot of VC funds are just kind of saying, 'Hey, this can only go up.' And that's usually a recipe for failure - when that starts to happen, you're becoming detached from reality," Nnamdi Okike, co-founder and managing partner at 645 Ventures, told Tradingview. Companies are branding themselves as AI-driven, even when their core technologies lack substantive AI components. A 2019 study by MMC Ventures found 40% of surveyed "AI startups" in Europe showed no evidence of AI integration in their products or services. And this was before OpenAI further raised the stakes with the launch of ChatGPT in 2022. It's a slippery slope. Even industry behemoths have had to clarify the extent of their AI involvement. Last year, tech giant and the fourth-most richest company in the world Amazon pushed back on allegations that its AI-powered "Just Walk Out" technology installed at its physical grocery stores for a cashierless checkout was largely being driven by around 1,000 workers in India who manually checked almost three quarters of the transactions. Amazon termed these reports "erroneous" and "untrue," adding that the staff in India were not reviewing live footage from the stores but simply reviewing the system. The incentive to brand as AI-native has only intensified. 


From deployment to optimisation: Why cloud management needs a smarter approach

As companies grow, so does their cloud footprint. Managing multiple cloud environments—across AWS, Azure, and GCP—often results in fragmented policies, security gaps, and operational inefficiencies. A Multi-Cloud Maturity Research Report by Vanson Bourne states that nearly 70% of organisations struggle with multi-cloud complexity, despite 95% agreeing that multi-cloud architectures are critical for success. Companies are shifting away from monolithic architecture to microservices, but managing distributed services at scale remains challenging. ... Regulatory requirements like SOC 2, HIPAA, and GDPR demand continuous monitoring and updates. The challenge is not just staying compliant but ensuring that security configurations remain airtight. IBM’s Cost of a Data Breach Report reveals that the average cost of a data breach in India reached ₹195 million in 2024, with cloud misconfiguration accounting for 12% of breaches. The risk is twofold: businesses either overprovision resources—wasting money—or leave environments under-secured, exposing them to breaches. Cyber threats are also evolving, with attackers increasingly targeting cloud environments. Phishing and credential theft accounted for 18% of incidents each, according to the IBM report. 


Inside a Cyberattack: How Hackers Steal Data

Once a hacker breaches the perimeter the standard practice is to establish a beachhead, and then move laterally to find the organisation’s crown jewels: their most valuable data. Within a financial or banking organisation it is likely there is a database on their server that contains sensitive customer information. A database is essentially a complicated spreadsheet, wherein a hacker can simply run a SELECT query and copy everything. In this instance data security is essential, however, many organisations confuse data security with cybersecurity. Organisations often rely on encryption to protect sensitive data, but encryption alone isn’t enough if the decryption keys are poorly managed. If an attacker gains access to the decryption key, they can instantly decrypt the data, rendering the encryption useless. ... To truly safeguard data, businesses must combine strong encryption with secure key management, access controls, and techniques like tokenisation or format-preserving encryption to minimise the impact of a breach. A database protected by Privacy Enhancing Technologies (PETs), such as tokenisation, becomes unreadable to hackers if the decryption key is stored offsite. Without breaching the organisation’s data protection vendor to access the key, an attacker cannot decrypt the data – making the process significantly more complicated. This can be a major deterrent to hackers.
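A toy illustration of tokenisation's core idea: the application database holds only opaque tokens, while the token-to-value mapping lives apart from it. A real deployment would use a hardened, separately operated vault service, not an in-process dictionary:

```python
import secrets

class TokenVault:
    """Toy tokenisation service. In production this mapping is held in a
    separately secured system, never alongside the application database."""
    def __init__(self):
        self._vault = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_urlsafe(12)  # random, carries no information
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
record = {"name": "A. Customer", "card": vault.tokenize("4111 1111 1111 1111")}

# A dump of the application database now yields only the token...
print(record)  # {'name': 'A. Customer', 'card': 'tok_...'}
# ...which is useless without access to the separately held vault.
print(vault.detokenize(record["card"]))
```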


Why Testing is a Long-Term Investment for Software Engineers

At its core, a test is a contract. It tells the system—and anyone reading the code—what should happen when given specific inputs. This contract helps ensure that as the software evolves, its expected behavior remains intact. A system without tests is like a building without smoke detectors. Sure, it might stand fine for now, but the moment something catches fire, there’s no safety mechanism to contain the damage. ... Over time, all code becomes legacy. Business requirements shift, architectures evolve, and what once worked becomes outdated. That’s why refactoring is not a luxury—it’s a necessity. But refactoring without tests? That’s walking blindfolded through a minefield. With a reliable test suite, engineers can reshape and improve their code with confidence. Tests confirm that behavior hasn’t changed—even as the internal structure is optimized. This is why tests are essential not just for correctness, but for sustainable growth. ... There’s a common myth: tests slow you down. But seasoned engineers know the opposite is true. Tests speed up development by reducing time spent debugging, catching regressions early, and removing the need for manual verification after every change. They also allow teams to work independently, since tests define and validate interfaces between components.
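A minimal example of the "test as contract" idea in Python (the function and its rules are invented for illustration). The tests pin down expected behavior, so the function body can later be refactored with confidence:

```python
# Run with: pytest test_discount.py
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Contract: discounts price by percent; rejects percentages outside 0-100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_applies_percentage():
    assert apply_discount(100.0, 25) == 75.0

def test_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)

# The internals of apply_discount can now be reshaped freely; as long as
# these tests pass, the promised behavior is intact.
```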


Why the road from passwords to passkeys is long, bumpy, and worth it - probably

While the current plan rests on a solid technical foundation, many important details are barriers to short-term adoption. For example, setting up a passkey for a particular website should be a rather seamless process; however, fully deactivating that passkey still relies on a manual multistep process that has yet to be automated. Further complicating matters, some current user-facing implementations of passkeys are so different from one another that they're likely to confuse end-users looking for a common, recognizable, and easily repeated user experience. ... Passkey proponents talk about how passkeys will be the death of the password. However, the truth is that the password died long ago -- just in a different way. We've all used passwords without considering what is happening behind the scenes. A password is a special kind of secret -- a shared or symmetric secret. For most online services and applications, setting a password requires us to first share that password with the relying party, the website or app operator. While history has proven how shared secrets can work well in very secure and often temporary contexts, if the HaveIBeenPwned.com website teaches us anything, it's that site and app authentication isn't one of those contexts. Passwords are too easily compromised.
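The contrast with shared secrets can be sketched in a few lines: in a passkey-style scheme, the server stores only a public key and verifies a signed challenge, so there is no shared secret to leak. This is a conceptual sketch using Ed25519 via the `cryptography` package, not the actual WebAuthn/passkey protocol:

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the private key never leaves the device; the site stores only the public key.
device_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_key.public_key()

# Login: the server sends a random challenge, which the device signs.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# The server verifies with the stored public key. A breach of the server's
# database leaks no secret that could be replayed against other sites.
try:
    server_stored_public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```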

Daily Tech Digest - February 25, 2024

Orgs Face Major SEC Penalties for Failing to Disclose Breaches

"It's a company issue, definitely not just CISO issue. Everybody will be very leery about vetting statements — why should I say this? — without having legal give it their blessing ... because they are so worried about having charges against them for making a statement." The worries will add up to additional costs for businesses. Because of the additional liability, companies will have to have more comprehensive Directors and Officers (D&O) liability insurance that not only covers the legal expenses for a CISO to defend themselves, but also for their expenses during an investigation. Businesses who will not pay to support and protect their CISO may find themselves unable to hire for the position, while conversely, CISOs may have trouble finding supportive companies, says Josh Salmanson, senior vice president of technology solutions at Telos Corp., a cyber risk management firm. "We're going to see less people wanting to be CISOs, or people demanding much higher salaries because they think it may be a very short-term role until they 'get busted' publicly," he says. "The number of people that will have a really ideal environment with support from the company and the funding that they need will likely remain small."


Risk Management Strategies for Tech Startups

As you continue to grow, your risk management strategies will shift. One of the best things you can do as your startup gains traction is to develop a contingency plan. A contingency plan can keep things afloat if you run into an unexpected loss of customers, funding problems, or even a data disaster. Your contingency plan should include, first and foremost, strong cybersecurity practices. Cyberattacks happen with even the largest and most successful conglomerates. While you might not be able to completely stop cyber criminals from getting in, prioritizing protective measures and developing a response plan will make it easier for your business to bounce back if an attack happens. Things like using cloud-based backups, developing strong passwords and authentication practices, and educating your employees on how to keep themselves safe are all great ways to protect your business from hackers. A successful contingency plan should also cover unexpected accidents and incidents. If someone gets injured on the job or your company gets sued, a strong insurance plan needs to be in place to cover legal fees and damages. 


The Architect’s Contract

The architect is a business technology strategist. They provide their clients with ways to augment business with technology strategy at both localized and universal scales. They make decisions which augment the value output of a business model (or a mission model) by describing technology solutions which can fundamentally alter the business model. Some architects specialize in one or more areas of that. But the general data indicated that even pure business architects are called on to rely on their technical skills quite often, and the most technical software architects must have numerous business skills to be successful. ... Governance is not why architects get into the job. The ones that do are generally architect managers, not competent architects themselves. All competent architects started out by making things. Proactive, innovation-based teams create new architects constantly. Moving up to too high a level of scope makes it very hard to stay a practicing architect. It takes radical dedication to learning to be a real chief architect. Scope is one of the biggest challenges of our field, as it is based on the concept of scarcity, like having city planners ‘design’ homes or skyscrapers or cathedrals.


Why DevOps is Key to Software Supply Chain Security

Organizations must also evaluate how well existing processes work to protect the business, then strategically add/subtract from there as needed. No matter what solutions are leveraged, more and different tools generate reams of more and different data. What’s important — and to whom? How do I manage the data? When can I trust it? Where do I store it? What problems does the new data help me solve? Organizations will need a way to effectively sift this information and deliver the right data to the right teams at the right time. To preserve the ability to quickly and continuously innovate, it will be important to focus on shifting security left as well as integrating automation whenever and wherever possible. As new security metadata becomes available, such as from SBOMs, new solutions for managing that metadata will be key. An open source initiative sponsored by Google, GUAC (Graph for Understanding Artifact Composition) is designed to integrate software security information, including SBOMs, attestations and vulnerability data. Users can query the resulting GUAC graph to help answer key security concerns, including proactive, preventive and reactive concerns.


The Future of Computing: Harnessing Molecules for Sustainable Data Management

Molecular computing harnesses the natural propensity of molecules to form complex, stable structures, allowing for parallel processing – an important advantage that enables computational tasks to be performed simultaneously, a feat that current supercomputers can only dream of. Enzymes like polymerases can simultaneously replicate millions of DNA strands, each acting as a separate computing pathway. This capability translates to potential parallel processing operations on the order of 10^15, dwarfing the 10^10 operations per second of the fastest supercomputers. Energy efficiency is another game-changer. The energy profile of molecular computing is notably low. DNA replication in a test tube requires minimal energy, estimated at less than a millionth of a joule per operation, compared to the approximately 10^-4 joules consumed by a typical transistor operation. This translates to a potential reduction in energy consumption by a factor of 10^5 or more, depending on the operation. To prove our point, training models like GPT-4 require tens of millions of kilowatt-hours; molecular computing could achieve similar results in a fraction of the time and with exponentially less energy.


Role of AI in Data Management Evolution – Interview with Rakesh Singh

Embracing AI-based solutions presents a challenge to organizations centered around governance and maintaining a firm grip on the overall processes. This challenge is particularly present in the financial sector, where maintaining control is not only a preference but a crucial necessity. Therefore, in tandem with the adoption of AI-driven solutions, a concerted emphasis must be placed on ensuring robust governance measures. For financial institutions, the imperative extends beyond the mere integration of AI; it encompasses a holistic commitment to upholding data security, enforcing comprehensive policies, safeguarding privacy, and adhering to stringent compliance standards. Recognizing that the implementation of AI introduces complexities and potential vulnerabilities, it becomes imperative to establish a framework that not only facilitates the effective utilization of AI but also fortifies the organization against risks. In essence, the successful adoption of AI in the financial domain necessitates a dual focus – one on leveraging the transformative potential of AI solutions and the other on erecting a resilient governance structure.


Ransomware Operation LockBit Reestablishes Dark Web Leak Site

Law enforcement agencies behind the takedown, acting under the banner of "Operation Cronos," suggested they would reveal on Friday the identity of LockBit leader LockBitSupp - but did not. "We know who he is. We know where he lives. We know how much he is worth. LockBitSupp has engaged with Law Enforcement :)," authorities instead wrote on the seized leak site. "LockBit has been seriously damaged by this takedown and his air of invincibility has been permanently pierced. Every move he has taken since the takedown is one of someone posturing, not of someone actually in control of the situation," said Allan Liska, principal intelligence analyst, Recorded Future. The re-established leak site includes victim entries apparently made just before Operation Cronos executed the takedown, including one for Fulton County, Ga. LockBit previously claimed responsibility for a January attack that disrupted the county court and tax systems. County District Attorney Fani Willis is pursuing a case against former President Donald Trump and 18 co-defendants for allegedly attempting to stop the transition of presidential power in 2020.


Toward Better Patching — A New Approach with a Dose of AI

By default, the NIST operated National Vulnerability Database (NVD) is the source of truth for CVSS scores. But NVD gets its entries from the CVE database, and if there is no completed CVE entry, there is no NVD entry — and therefore no immediately trusted and verifiable CVSS score. Despite this, security teams use whatever CVSS they are told as a primary factor in their vulnerability patch triaging — the higher the score, the greater the perceived likelihood of exploitation with a greater potential for harm – and it is likely to be a score applied by the vulnerability researcher. There is an inevitable delay and confusion (due to ‘responsible disclosure’, possible delays in posting to the CVE database, and an element of subjectivity in the CVSS score). “The delay in CVE scoring often means that defenders face two uphill battles regarding vulnerability management. First, they need a prioritization method to determine which of the thousands of CVEs published each month they should patch,” notes Coalition. “Second, they must patch these CVEs before a threat actor leverages them to target their organization.”
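In practice, the prioritization method Coalition describes usually amounts to sorting on more than raw CVSS. A hedged sketch, with invented data, of risk-based triage that ranks known exploitation and exposure above severity alone (real inputs would come from feeds such as NVD and the CISA KEV catalog):

```python
# Hypothetical inventory; CVE IDs, scores, and flags are invented for illustration.
vulns = [
    {"cve": "CVE-2025-0001", "cvss": 9.8, "known_exploited": False, "internet_facing": False},
    {"cve": "CVE-2025-0002", "cvss": 7.5, "known_exploited": True,  "internet_facing": True},
    {"cve": "CVE-2025-0003", "cvss": 6.1, "known_exploited": False, "internet_facing": True},
]

def triage_key(v: dict) -> tuple:
    # Known exploitation and exposure outrank raw severity.
    return (v["known_exploited"], v["internet_facing"], v["cvss"])

for v in sorted(vulns, key=triage_key, reverse=True):
    print(v["cve"], v["cvss"], "EXPLOITED" if v["known_exploited"] else "")
```

Under this ordering, the actively exploited 7.5 outranks the unexploited, internal 9.8 — exactly the inversion a CVSS-only queue misses.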


Apple Beefs Up iMessage With Quantum-Resistant Encryption

"To our knowledge, PQ3 has the strongest security properties of any at-scale messaging protocol in the world," Apple's SEAR team explained in a blog post announcing the new protocol. The addition of PQ3 follows iMessage's October 2023 enhancement featuring Contact Key Verification, designed to detect sophisticated attacks against Apple's iMessage servers while letting users verify they are messaging specifically with their intended recipients. IMessage with PQ3 is backed by mathematical validation from a team led by professor David Basin, head of the Information Security Group at ETH ZĂŒrich and co-inventor of Tamarin, a well-regarded security protocol verification tool. Basin and his research team at ETH ZĂŒrich used Tamarin to perform a technical evaluation of PQ3, published by Apple. Also evaluating PQ3 was University of Waterloo professor Douglas Stebila, known for his research on post-quantum security for Internet protocols. According to Apple's SEAR team, both research groups undertook divergent but complementary approaches, running different mathematical models to test the security of PQ3.


Is "Secure by Design" Failing?

The threat landscape around new Common Vulnerabilities and Exposures (CVEs) is one that every organization should take seriously. With a record-breaking 28,092 new CVEs published in 2023, bad actors are simply waiting to be handed easy footholds into their target organizations, and they don't have to wait long. Research from Qualys showed that three quarters of CVEs are exploited by attackers within just 19 days of their publication. And yet, organizations are failing to equip their DevOps teams with the secure coding skills and knowledge they need to eliminate vulnerabilities in the first place. Despite 47% of organizations blaming skills shortages for their vulnerability remediation failures, only 36% have their developers learn to write secure code. ... Firstly, developers need to understand the role they play in securing overall application development. This begins with writing more secure code, but this knowledge is also essential in code reviews. As developers write faster, or even leverage generative AI and open-source code to deliver quicker applications, being able to properly review and remediate insecure code becomes crucial.
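As a concrete example of the kind of fix secure-code training and code reviews should produce, here is the classic SQL injection pattern and its parameterized remediation, shown with Python's built-in sqlite3 for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable pattern a code review should flag: SQL built by string interpolation.
unsafe = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())   # injection succeeds: returns a row anyway

# Remediated pattern: parameterized query, input treated as data, not SQL.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # [] -- no match, no injection
```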



Quote for the day:

"Great achievers are driven, not so much by the pursuit of success, but by the fear of failure." -- Larry Ellison

Daily Tech Digest - February 13, 2023

Mergers and Acquisitions in Healthcare: The Security Risks

Incidents such as the CommonSpirit ransomware attack highlight the critical importance for entities to carefully assess and address potential IT security risks in a potential merger or acquisition, experts say. "We are seeing that well-established health systems or entities that have very mature cybersecurity programs take on an entity which is less secure," says John Riggi, national adviser for cybersecurity and risk at the American Hospital Association. The association advises hospitals pursuing mergers to treat cyber risk with the same priority as financial analysis. But identifying the array of systems and myriad devices used by a healthcare entity that's being acquired is not easy. "When you buy an organization, you typically don't know everything you're buying," says Kathy Hughes, CISO of New York-based Northwell Health, which has 21 hospitals and over 550 outpatient facilities, many of them acquired over the years; Northwell itself is the result of a 1997 merger between North Shore Health System and Long Island Jewish Medical Center.


Forget ChatGPT vs Bard, The Real Battle is GPUs vs TPUs

Solving for efficient matrix multiplication can cut down on the amount of compute resources required for training and inferencing tasks. While other methods like quantisation and model shrinking have also proven to cut down on compute, they sacrifice accuracy. A tech giant creating a state-of-the-art model would rather spend the $5 million if there is no way to cut costs. ... NVIDIA's GPUs were well-suited to matrix multiplication tasks because their hardware architecture can effectively parallelise the work across multiple CUDA cores. Training models on GPUs became the status quo for deep learning in 2012, and the industry has never looked back. Building on this, Google launched the first version of the tensor processing unit (TPU) in 2016, a custom ASIC (application-specific integrated circuit) optimised for tensor calculations. In addition to this optimisation, TPUs also work extremely well with Google's TensorFlow framework, the tool of choice for machine learning engineers at the company.
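
As an illustration (mine, not the article's), here is a minimal TensorFlow sketch of the workload at the heart of the GPU-versus-TPU race: a device-placed matrix multiplication. It assumes TensorFlow 2.x is installed; running on an actual TPU additionally requires a tf.distribute TPUStrategy, which is omitted for brevity.

```python
# A minimal sketch of the core operation both GPUs and TPUs accelerate.
import tensorflow as tf

a = tf.random.uniform((1024, 1024))
b = tf.random.uniform((1024, 1024))

# Place the op on a GPU if one is visible; otherwise fall back to the CPU.
gpus = tf.config.list_logical_devices("GPU")
device = gpus[0].name if gpus else "/CPU:0"

with tf.device(device):
    c = tf.matmul(a, b)  # parallelised across CUDA cores on a GPU

print(f"ran a {c.shape} matmul on {device}")
```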


As Digital Trade Expands, Data Governance Fragments

The upshot is that we are still far from any broader global effort. Even preliminary convergence between the United States and the European Union on national laws about data protection and privacy is difficult to achieve. Instead, Aaronson advocated for the establishment of a new international organization that could provide proper incentives to, and pay, global firms to share data. Overall, the panellists urged that technical discussions of data flows, data governance and rules for digital trade be contextualized within fundamental concerns about the nature of data and the role of human rights; these concerns require equal attention and governance. Effective digital governance demands a fundamental rethink of the nature of data. As panellist Kyung Sin Park emphasized, data embeds fundamental human freedoms and human information and is closely linked to human rights; it is much more than an economic asset used to train artificial intelligence (AI) algorithms.


Fall in Love with the Problem, Not the Solution: A Handbook for Entrepreneurs

Think of a problem—a big problem, something worth solving, something that would make the world a better place. Ask yourself, who has this problem? If you happen to be the only person on the planet with this problem, then go to a shrink. It’s much cheaper and easier than building a startup. But if a lot of people have this problem, go and speak with those people to understand their perception of the problem. Know the reality, and only then start building the solution. If you follow this path and your solution works, it’s guaranteed to create value. But there is a more important part to this. Imagine speaking with people and their feedback is, yeah, go ahead and solve that for me—this is a big problem. All of a sudden you feel committed to this journey. You essentially fall in love with the problem. Falling in love with the problem dramatically increases your likelihood of being successful because the problem becomes the north star of your journey, keeping you focused.


Data Mobility Framework: Expert Offers Four Keys to Know

It’s common for hybrid work teams to schedule when employees will be in the office and when they’ll work remotely. But while remote workers don’t always work from the same home office, they do expect similar access to business data and applications regardless of the network or device they’re using—and all of this remote connectivity has a material impact on data storage demands. Organizations try to balance data storage initiatives to address this without causing downtime to mission-critical applications and data. The faster organizations can add new storage or move data non-disruptively to another location, the better services they can deliver to end-users. Thankfully, the right data migration partner can perform these critical services non-disruptively in a matter of hours. This enables the organization and its partners to access a range of capabilities to minimize data migration efforts, including being able to migrate “hot data” to a new, more powerful array without downtime. Hot data is any data that is in constant demand, such as a database or application that’s essential for your business to operate.


Stop Suffocating Success! 7 Ways Established Businesses Can Start Thinking Like a Startup.

Startups aren't trapped by old rules—they're in the process of inventing themselves. Obviously, established companies can't just completely throw out the rulebook. But remember: rules should exist to help, not just because they've always been there. Otherwise, people wind up blindly following often-annoying processes without thinking about the end goal. For example, if multiple clients ask for a product feature that hasn't been included, but there isn't a feature review meeting until the next quarter, does it make sense to follow the rules and wait? Or should staff be empowered to add the feature (or, at least, fast-track a product review)? Beware of any policy that exists because "We've-always-done-things-this-way." ... Incompetent workers can take a terrible toll. To start, everything's harder when the people around you don't carry their weight. It's also demoralizing—you're working so hard and hitting all your goals, while the person next to you fails spectacularly and apparently isn't penalized for it. Over time, you're likely to grow bitter or simply stop trying so hard, since results clearly don't matter.


The Stubborn Immaturity of Edge Computing

Of course, they don’t even think of it as “the edge”. To them, it’s where real work takes place. So when IT vendors and cloud providers and carriers talk about the “far edge” (where real customers and real factories and real work takes place), that makes no sense to people outside of IT vendors’ data-center-centric bubble. The real world doesn’t revolve around the data center, or the cloud. What’s really far in the real world? The cloud. The data center. Edge computing is a technology style that’s part of a digital transformation trend. Digital transformation has been on a march for decades, well before we called it that. It’s accelerated because of cloud computing, and global connectivity. A lot of the technology transformation has been taking place at the back-end. In data centers, in business models. And there’s a lot left to be done. But the true green field in digital transformation is where people and things and factories actually exist. (OK, we’ll call that the “edge”, but that’s such an old IT-centric way of talking!)


How the Future of Work Will Be Shaped by No Code AI

No-code, like other breakthroughs, is a thrilling disruption and improvement in the software development process, particularly for small firms. Among its various applications, no-code has enabled users with little technical experience to create applications using pre-built frameworks and templates, which will undoubtedly lead to further invention, design and development in the digital town square. It also cuts down on software development time, allowing for faster implementation of business solutions. Aside from the time saved, no-code can free up computing and human resources by transferring development duties to software suppliers. ... No-code is also a game changer for many AI technology developers and non-technical people, since it focuses on something we never imagined possible in the difficult field of artificial intelligence: simplicity. Anyone will be able to swiftly build AI apps using no-code development platforms, which provide a visual, code-free, easy-to-use interface for deploying AI and machine learning models.


Code Readability vs Performance: Here is The Verdict

Code performance is critical, especially when working on projects that require high-speed computation and real-time processing, where slow code translates directly into sluggish user experiences. But focusing on the performance of code that is not readable is useless; worse, unreadable code is prone to bugs and errors. Performance is a quirky thing. Writing code with performance as the first priority is not a path that any developer would take, or even recommend. In a Reddit thread, a developer gives the example of one version of a piece of code that runs in 1 millisecond and another that runs in 0.1 milliseconds. No one can really notice the difference between the two as long as the code is "fast enough". So chasing performance while sacrificing the readability of the code can be counterproductive. Moreover, in the same Reddit thread, another developer pointed out that writing faster algorithms often requires writing harder-to-read code, which again sacrifices readability.
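
To make the trade-off concrete, here is a small, hypothetical Python illustration (mine, not from the Reddit thread): both functions are correct and the bit-twiddling one is faster, but only the readable one states its intent without a comment to decode it.

```python
# Hypothetical readability-vs-performance example: is n a power of two?

def is_power_of_two_readable(n: int) -> bool:
    """Readable version: halve n until it can't be halved evenly."""
    if n < 1:
        return False
    while n % 2 == 0:
        n //= 2
    return n == 1

def is_power_of_two_fast(n: int) -> bool:
    """Faster bit-twiddling version: opaque without knowing that
    clearing the lowest set bit of a power of two yields zero."""
    return n > 0 and (n & (n - 1)) == 0

# Both agree on every input; the difference is who can read them later.
assert all(is_power_of_two_readable(n) == is_power_of_two_fast(n)
           for n in range(1, 10_000))
```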


LockBit Group Goes From Denial to Bargaining Over Royal Mail

LockBit's about-face - from "it wasn't us" to "it was us" - is a reminder that ransomware groups will continue to lie, cheat and steal, so long as they can profit at a victim's expense. Isn't hitting a piece of Britain's critical national infrastructure - as in, the national postal service - risky? After DarkSide hit Colonial Pipeline in the United States in May 2021, for example, the group first blamed an affiliate before shutting down its operations and later rebooting under a different name. While hitting CNI might seem like playing with fire, the consensus among many security experts is that ransomware groups' target selection remains opportunistic. Both operators and any affiliates who use their malware, as well as the initial access brokers from whom they often buy ready-made access to victims' networks, seem to snare whomever they can catch and then perhaps prioritize victims based on size and industry. What's notable isn't necessarily that LockBit - or one of its affiliates - hit Royal Mail, but that it decided to press the attack.



Quote for the day:


“None of us can afford to play small anymore. The time to step up and lead is now.” -- Claudio Toyama