
Daily Tech Digest - October 02, 2025


Quote for the day:

"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer


AI cost overruns are adding up — with major implications for CIOs

Many organizations appear to be “flying blind” while deploying AI, adds John Pettit, CTO at Google Workspace professional services firm Promevo. If a CIO-led AI project misses budget by a huge margin, it reflects on the CIO’s credibility, he adds. “Trust is your most important currency when leading projects and organizations,” he says. “If your AI initiative costs 50% more than forecast, the CFO and board will hesitate before approving the next one.” ... Beyond creating distrust in IT leadership, missed cost estimates also hurt the company’s bottom line, notes Farai Alleyne, SVP of IT operations at accounts payable software vendor Billtrust. “It is not just an IT spending issue, but it could materialize into an overall business financials issue,” he says. ... enterprise leaders often assume AI coding assistants or no-code/low-code tools can take care of most of the software development needed to roll out a new AI tool. These tools can be used to create small prototypes, but for enterprise-grade integrations or multi-agent systems, the complexity creates additional costs, he says. ... In addition, organizations often underestimate the cost of operating an AI project, he says. Token usage for vectorization and LLM calls can cost tens of thousands of dollars per month, but hosting your own models isn’t cheap, either, with on-premises infrastructure costs potentially running into the thousands of dollars per month.
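The operating-cost point above lends itself to a quick back-of-envelope check. The sketch below is illustrative only: the request volume, tokens per request, and per-million-token price are assumptions, not any vendor's actual pricing.

```python
# Back-of-envelope monthly token cost for an LLM-backed feature.
# All figures are illustrative assumptions, not real vendor pricing.
def monthly_token_cost(requests_per_day: float,
                       tokens_per_request: float,
                       usd_per_million_tokens: float) -> float:
    """Estimate a month (30 days) of token spend in USD."""
    tokens_per_month = requests_per_day * 30 * tokens_per_request
    return tokens_per_month / 1_000_000 * usd_per_million_tokens

# e.g. 50k requests/day at 4k tokens each, $5 per million tokens:
cost = monthly_token_cost(50_000, 4_000, 5.0)
print(f"${cost:,.0f}/month")  # $30,000/month
```

Even with modest assumptions, the numbers land in the "tens of thousands per month" range the article describes, which is why estimating this before rollout matters.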


AI-Powered Digital Transformation: A C-Suite Blueprint For The Future Of Business

At its core, digital transformation is a strategic endeavor, not a technological one. To succeed, it should be at the forefront of the organizational strategy. This means moving beyond simply automating existing processes and instead asking how AI enables new ways of creating value. The shift is from operational efficiency to business model innovation. ... True digital leaders possess a visionary mindset and the critical competencies to guide their teams through change. They must be more than tech-savvy; they must be emotionally intelligent and capable of inspiring trust. This demands an intentional effort to develop leaders who can bridge the gap between deep business acumen and digital fluency. ... With the strategic, cultural and data foundations in place, organizations can focus on building a scalable and secure digital infrastructure. This may involve adopting cloud computing to provide flexible resources needed for big data processing and AI model deployment. It can also mean investing in a range of complementary technologies that, when integrated, create a cohesive and intelligent ecosystem. ... Digital transformation is a complex, continuous journey, not a single destination. This framework provides a blueprint, but its success requires leadership. The challenge is not technological; it's a test of leadership, culture and strategic foresight.


Why Automation Fails Without the Right QA Mindset

Automation alone doesn’t guarantee quality — it is only as effective as the tests it is scripted to run. If the requirements are misunderstood, automated tests may pass while critical issues remain undetected. I have seen failures where teams relied solely on automation without involving proper QA practices, leading to tests that validated incorrect behavior. Automation frequently fails to detect new or unexpected issues introduced by system upgrades. It often misses critical problems such as faulty data mapping, incomplete user interface (UI) testing and gaps in test coverage due to outdated scripts. Lack of adaptability is another common obstacle that I’ve repeatedly seen undermine automation testing efforts. When UI elements are tightly coupled, even minor changes can disrupt test cases. With the right QA mindset, this challenge is anticipated — promoting modular, maintainable automation strategies capable of adapting to frequent UI and logic changes. Automation lacks the critical analysis required to validate business logic and perform true end-to-end testing. From my experience, the human QA mindset proved essential during the testing of a mortgage loan calculation system. While automation handled standard calculations and data validation, it could not assess whether the logic aligned with real-world lending rules.


Stop Feeding AI Junk: A Systematic Approach to Unstructured Data Ingestion

Worse, bad data reduces accuracy. Poor quality data not only adds noise, but it also leads to incorrect outputs that can erode trust in AI systems. The result is a double penalty: wasted money and poor performance. Enterprises must therefore treat data ingestion as a discipline in its own right, especially for unstructured data. Many current ingestion methods are blunt instruments. They connect to a data source and pull in everything, or they rely on copy-and-sync pipelines that treat all data as equal. These methods may be convenient, but they lack the intelligence to separate useful information from irrelevant clutter. Such approaches create bloated AI pipelines that are expensive to maintain and impossible to fine-tune. ... Once data is classified, the next step is to curate it. Not all data is equal. Some information may be outdated, irrelevant, or contradictory. Curating data means deliberately filtering for quality and relevance before ingestion. This ensures that only useful content is fed to AI systems, saving compute cycles and improving accuracy. This also ensures that RAG and LLM solutions can utilize their context windows on tokens for relevant data and not get cluttered up with irrelevant junk. ... Generic ingestion pipelines often lump all data into a central bucket. A better approach is to segment data based on specific AI use cases. 
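The curation step described above can be sketched as a simple pre-ingestion gate. The thresholds and document fields below (`updated`, `text`, `tags`) are hypothetical placeholders for whatever metadata a real pipeline carries.

```python
# Sketch of a pre-ingestion curation gate: keep only documents that pass
# basic freshness, substance, and relevance checks before they reach a
# RAG pipeline. Thresholds and field names are illustrative assumptions.
from datetime import datetime, timedelta

def curate(docs, max_age_days=365, min_chars=200, relevant_tags=("finance",)):
    cutoff = datetime.now() - timedelta(days=max_age_days)
    kept = []
    for doc in docs:
        if doc["updated"] < cutoff:                      # outdated
            continue
        if len(doc["text"]) < min_chars:                 # too thin to be useful
            continue
        if not set(doc["tags"]) & set(relevant_tags):    # off-topic for this use case
            continue
        kept.append(doc)
    return kept
```

The point of a gate like this is exactly the one made above: tokens in a context window are a budget, and every irrelevant document admitted at ingestion is paid for again at query time.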


Five critical API security flaws developers must avoid

Developers might assume that if an API endpoint isn’t publicly advertised, it’s inherently secure, a dangerous myth known as “security by obscurity.” This mistake manifests in a few critical ways: developers may use easily guessable API keys or leave critical endpoints entirely unprotected, allowing anyone to access them without proving their identity. ... You must treat all incoming data as untrusted, meaning all input must be validated on the server-side. Your developers should implement comprehensive server-side checks for data types, formats, lengths, and expected values. Instead of trying to block everything that is bad, it is more secure to define precisely what is allowed. Finally, before displaying or using any data that comes back from the API, ensure it is properly sanitized and escaped to prevent injection attacks from reaching end-users. ... Your teams must adhere to the “only what’s necessary” principle by designing API responses to return only the absolute minimum data required by the consuming application. For production environments, configure systems to suppress detailed error messages and stack traces, replacing them with generic errors while logging the specifics internally for your team. ... Your security strategy must incorporate rate limiting to apply strict controls on the number of requests a client can make within a given timeframe, whether tracked by IP address, authenticated user, or API key.
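The allowlist principle ("define precisely what is allowed") might look something like this minimal sketch. The field names and patterns are hypothetical, and a production service would more likely reach for a schema library such as `jsonschema` or `pydantic`.

```python
# Minimal server-side allowlist validation sketch. Rather than trying to
# block known-bad input, each field must match an explicit allowed pattern,
# and unknown fields are rejected outright. Fields/patterns are examples.
import re

ALLOWED_FIELDS = {
    "username": re.compile(r"[A-Za-z0-9_]{3,32}"),          # letters/digits/underscore only
    "email":    re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"),    # deliberately simple check
    "age":      re.compile(r"\d{1,3}"),
}

def validate(payload: dict) -> dict:
    """Return a dict of field -> error; an empty dict means the payload passed."""
    errors = {}
    for key in payload:
        if key not in ALLOWED_FIELDS:
            errors[key] = "unexpected field"
    for key, pattern in ALLOWED_FIELDS.items():
        value = payload.get(key)
        if value is None:
            errors[key] = "missing"
        elif not pattern.fullmatch(str(value)):
            errors[key] = "invalid format"
    return errors

print(validate({"username": "alice_1", "email": "a@b.co", "age": "30"}))  # {}
print(validate({"username": "<script>", "email": "x", "age": "30"}))
```

Note the inversion the article recommends: the code never enumerates dangerous input, it only enumerates acceptable input, so anything novel fails closed.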


Disaster recovery and business continuity: How to create an effective plan

If your disaster recovery and business continuity plan has been gathering dust on the shelf, it’s time for a full rebuild from the ground up. Key components include strategies such as minimum viable business (MVB); emerging technologies such as AI and generative AI; and tactical processes and approaches such as integrated threat hunting, automated data discovery and classification, continuous backups, immutable data, and gamified tabletop testing exercises. Backup-as-a-service (BaaS) and disaster recovery-as-a-service (DRaaS) are also becoming more popular, as enterprises look to take advantage of the scalability, cloud storage options, and ease-of-use associated with the “as-a-service” model. ... Accenture’s Whelan says that rather than try to restore the entire business in the event of a disaster, a better approach might be to create a skeletal replica of the business, an MVB, that can be spun up immediately to keep mission-critical processes going while traditional backup and recovery efforts are under way. ... The two additional elements are: one offline, immutable, or air-gapped backup that will enable organizations to get back on their feet in the event of a ransomware attack, and a goal of zero errors. Immutable data is “the gold standard,” Whelan says, but there are complexities associated with proper implementation.


Building Intelligence into the Database Layer

At the core of this evolution is the simple architectural idea of the database as an active intelligence engine. Rather than simply recording and serving historical data, an intelligent database interprets incoming signals, transforms them in real-time, and triggers meaningful actions directly from within the database layer. From a developer’s perspective, it still looks like a database, but under the hood, it’s something more: a programmable, event-driven system designed to act on high-velocity data streams with intense precision in real-time. ... Built-in processing engines unlock features like anomaly detection, forecasting, downsampling, and alerting in true real-time. These embedded engines enable real-time computation directly inside the database. Instead of moving data to external systems for analysis or automation, developers can run logic where the data already lives. ... Active intelligence doesn’t just enable faster reactions; it opens the door to proactive strategies. By continuously analyzing streaming data and comparing it to historical trends, systems can anticipate issues before they escalate. For example, gradual changes in sensor behavior can signal the early stages of a failure, giving teams time to intervene. ... Developers need more than just storage and query, they need tools that think. Embedding intelligence into the database layer represents a shift toward active infrastructure: systems that monitor, analyze, and respond at the edge, in the cloud, and across distributed environments.
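As a toy illustration of the kind of logic an "active" database might run in-engine rather than shipping data out for analysis, here is a rolling-window anomaly flagger; the window size and threshold are arbitrary assumptions.

```python
# Toy rolling-window anomaly detection of the sort an "active" time-series
# engine might run next to the data: flag points that deviate sharply from
# the recent mean. Window and threshold values are arbitrary assumptions.
from collections import deque

def anomaly_flags(stream, window=5, threshold=2.0):
    """Return one bool per point: True if it deviates > threshold * mean deviation."""
    recent = deque(maxlen=window)
    flags = []
    for x in stream:
        if len(recent) == recent.maxlen:
            mean = sum(recent) / len(recent)
            spread = sum(abs(v - mean) for v in recent) / len(recent)
            spread = max(spread, 1e-9)  # guard against a perfectly flat window
            flags.append(abs(x - mean) > threshold * spread)
        else:
            flags.append(False)  # not enough history yet
        recent.append(x)
    return flags
```

Running this where the data lives, instead of exporting to an external system, is the architectural shift the excerpt describes.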


AI Cybersecurity Arms Race: Are Companies Ready?

Security operations centers were already overwhelmed before AI became mainstream. Human analysts, drowning in alerts, can’t possibly match the velocity of machine-generated threats. Detection tools, built on static signatures and rules, simply can’t keep up with attacks that mutate continuously. The vendor landscape isn’t much more reassuring. Every security company now claims its product is “AI-powered,” but too many of these features are black boxes, immature, or little more than marketing gloss. ... That doesn’t mean defenders are standing still. AI is beginning to reshape cybersecurity on the defensive side, too, and the potential is enormous. Anomaly detection, fueled by machine learning, is allowing organizations to spot unusual behavior across networks, endpoints, and cloud environments far faster than humans ever could. In security operations centers, agentic AI assistants are beginning to triage alerts, summarize incidents, and even kick off automated remediation workflows. ... The AI arms race isn’t something the CISO can handle alone; it belongs squarely in the boardroom. The challenge isn’t just technical — it’s strategic. Budgets must be allocated in ways that balance proven defenses with emerging AI tools that may not be perfect but are rapidly becoming necessary. Security teams must be retrained and upskilled to govern, tune, and trust AI systems. Policies need to evolve to address new risks such as AI model poisoning or unintended bias.


Agentic AI needs stronger digital certificates

The consensus among practitioners is that existing technologies can handle agentic AI – if, that is, organisations apply them correctly from the start. “Agentic AI fits into well-understood security best practices and paradigms, like zero trust,” Wetmore emphasises. “We have the technology available to us – the protocols and interfaces and infrastructure – to do this well, to automate provisioning of strong identities, to enforce policy, to validate least privilege access.” The key is approaching AI agents with security-by-design principles rather than bolting on protection as an afterthought. Sebastian Weir, executive partner and AI Practice Leader at IBM UK&I, sees this shift happening in his client conversations. ... Perhaps the most critical insight from security practitioners is that managing agentic AI isn’t primarily about new technology – it’s about governance and orchestration. The same platforms and protocols that enable modern DevOps and microservices can support AI agents, but only with proper oversight. “Your ability to scale is about how you create repeatable, controllable patterns in delivery,” Weir explains. “That’s where capabilities like orchestration frameworks come in – to create that common plane of provisioning agents anywhere in any platform and then governance layers to provide auditability and control.”


Learning from the Inevitable

Currently, too many organizations follow a “nuke and pave” approach to IR, opting to just reimage computers because they don’t have the people to properly extract the wisdom from an incident. In the short term, this is faster and cheaper but has a detrimental impact on protecting against future threats. When you refuse to learn from past mistakes, you are more prone to repeating them. Conversely, organizations may turn to outsourcing. Experts in managed security services and IR have realized consulting gives them a broader reach and impact over the problem — but none of these are long-term solutions. This kind of short-sighted IR creates a false sense of security. Organizations are solving the problem for the time being, but what about the future? Data breaches are going to happen, and reliance on reactive problem-solving creates a flimsy IR program that leaves an organization vulnerable to threats. ... Knowledge-sharing is the best way to go about this. Sharing key learnings from previous attacks is how these teams can grow and prevent future disasters. The problem is that while plenty of engineers agree they learn the most when something “breaks” and that incidents are a treasure trove of knowledge for security teams, these conversations are often restricted to need-to-know channels. Openness about incidents is the only way to really teach teams how to address them.

Daily Tech Digest - August 20, 2025


Quote for the day:

"Real difficulties can be overcome; it is only the imaginary ones that are unconquerable." -- Theodore N. Vail


Asian Orgs Shift Cybersecurity Requirements to Suppliers

Cybersecurity audits need to move away from a yearly or quarterly exercise to continuous evaluation, says Security Scorecard's Cobb. As part of that, organizations should look to work with their suppliers to build a relationship that can help both companies be more resilient, he says. "Maybe you do an on-site visit or maybe you do a specific evidence gathering with that supplier, especially if they're a critical supplier based on their grade," Cobb says. "That security rating is a great first step for assessment, and it also will lead into further discussions with that supplier around what things can you do better." And yes, artificial intelligence (AI) is making inroads into monitoring third-party risk profiles as well. Consultancy EY imagines a future where multiple automated agents track information about suppliers and when an event — whether cyber, geopolitical, or meteorological — affects one or more supply chains, will automatically develop plans to mitigate the risk. Pointing out the repeated supply chain shocks from the pandemic, geopolitics, and climate change, EY argues that an automated system is necessary to keep up. When a chemical spill or a cybersecurity breach affects a supplier in Southeast Asia, for example, the system would track the news, predict the impact on a company's supply, and suggest alternate sources, if needed, the EY report stated.


The successes and challenges of AI agents

To really get the benefits, businesses will need to redesign the way work is done. The agent should be placed at the center of the task, with people stepping in only when human judgment is required. There is also the issue of trust. If the agent is only giving suggestions, a person can check the results. But when the agent acts directly, the risks are higher. This is where safety rules, testing systems, and clear records become important. Right now, these systems are still being built. One unexpected problem is that agents often think they are done when they are not. Humans know when a task is finished. Agents sometimes miss that. ... Today, the real barrier goes beyond just technology. It is also how people think about agents. Some overestimate what they can do; others are hesitant to try them. The truth lies in the middle. Agents are strong with goal-based and repeatable tasks. They are not ready to replace deep human thinking yet. ... Still, the direction is clear. In the next two years, agents will become normal in customer support and software development. Writing code, checking it, and merging it will become faster. Agents will handle more of these steps with less need for back-and-forth. As this grows, companies may create new roles to manage agents, needing someone to track how they are used, make sure they follow rules, and measure how much value they bring. This role could be as common as a data officer in the future.


How To Prepare Your Platform For Agentic Commerce

APIs and MCP servers are inherently more agent-friendly but less ubiquitous than websites. They expose services in a structured, scalable way that's perfect for agent consumption. The tradeoff is that you must find a way to allow verified agents to get access to your APIs. This is where some payment processing protocols can help by allowing verified agents to get access credentials that leverage your existing authentication, rate-limiting and abuse-prevention mechanisms to ensure access doesn’t lead to spam or scraping. In many cases, the best path is a hybrid approach: Expand your existing website to allow agent-compatible access and checkout while building key capabilities for agent access via APIs or MCP servers. ... Agents work best with standardized checkouts instead of needing to dodge botblockers and captchas while filling out forms via screenscraping. They need an entirely programmatic checkout process. That means you must move beyond more brittle browser autofill and instead accept tokenized payments directly via API. These tokens can carry pre-authorized payment methods such as tokenized credit cards, digital wallets (e.g., Apple Pay and PayPal), stablecoins or on-chain assets and account-to-account transfers. When combined with identity tokens, these payment tokens allow agents to present a complete, scoped credential that you can inspect and charge instantly. Think Stripe Checkout but for AI.
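One way to picture the "complete, scoped credential" described above is a token that bundles a payment reference, an agent identity, and explicit limits. The shape below is purely hypothetical; none of these field names come from a real protocol.

```python
# Hypothetical shape of a scoped, tokenized payment credential an agent
# might present at checkout, plus the merchant-side authorization check.
# Field names are illustrative, not taken from any real payment protocol.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentPaymentToken:
    payment_token: str      # tokenized card/wallet reference, never a raw card number
    identity_token: str     # proves which verified agent (and user) is acting
    max_amount_cents: int   # spend ceiling pre-authorized by the user
    currency: str
    expires_at: datetime

def authorize(token: AgentPaymentToken, charge_cents: int, now=None) -> bool:
    """Accept the charge only if it is within the token's scope and lifetime."""
    now = now or datetime.now(timezone.utc)
    return charge_cents <= token.max_amount_cents and now < token.expires_at
```

The key property is that the credential is inspectable and bounded: the merchant can verify scope before charging, instead of trusting whatever an autofilled form claims.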


AI agents alone can’t be trusted in verification

One of the biggest risks comes from what’s known as compounding errors. Even a very accurate AI system – for example, 95% – becomes far less reliable when it’s chained to a series of compounding and related decisions. By the fifth hypothetical step, accuracy would drop to 77% or less. Unlike human teams, these systems don’t raise flags or signal uncertainty. That’s what makes them so risky: when they fail, they tend to do so silently and exponentially. ... This opacity is particularly dangerous in the fight against fraud, which is only getting more advanced. In 2025, fraudsters aren’t using fake passports and bad Photoshop. They’re using AI-generated identities, videos, and documents that are nearly impossible to distinguish from the real thing. Tools like Google’s Veo 3 or open-source image generators allow anyone to produce high-quality synthetic content at scale. ... Responsible and effective use of AI means using multiple models to cross-check results to avoid the domino effect of one error feeding into the next. It means assigning human reviewers to the most sensitive or high-risk cases – especially when fraud tactics evolve faster than models can be retrained. And it means having clear escalation procedures and full audit trails that can stand up to regulatory scrutiny. This hybrid model offers the best of both worlds: the speed and scale of AI, combined with the judgment and flexibility of human experts. As fraud becomes more sophisticated, this balance will be essential. 
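The compounding-error arithmetic above is easy to verify: a 95%-accurate step, applied five times independently, is right only about 77% of the time.

```python
# Back-of-envelope: accuracy of a chain of dependent automated decisions,
# assuming each step is independently 95% accurate (illustrative figures
# from the article, not a model of any specific system).
def chained_accuracy(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step in the chain is correct."""
    return per_step_accuracy ** steps

for n in range(1, 6):
    print(f"step {n}: {chained_accuracy(0.95, n):.1%}")
# By step 5 the chain is right only about 77% of the time.
```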


AI in the classroom is important for real-world skills, college professors say

The agents can flag unsupported claims in students’ writing, explain why evidence is needed, and recommend credible sources, Luke Behnke, vice president of product management at Grammarly, said in an interview. “Colleges recognize it’s their responsibility to prepare students for the workforce, and that now includes AI literacy,” Behnke said. Universities are also implementing AI in their own learning management systems and providing students and staff access to Google’s Gemini, Microsoft’s Copilot and OpenAI’s ChatGPT. ... Cuo asks students not to simply accept whatever results advanced genAI models spit out, as they may be riddled with factual errors and hallucinations. “Students need to select and read more by themselves to create something that people don’t recognize as an AI product,” Cuo said. Some professors are trying to mitigate AI use by altering coursework and assignments, while others prefer not to use it at all, said Paul Shovlin, an assistant professor of AI and digital rhetoric at Ohio University. But students have different requirements and use AI tools for personalized learning, collaboration, and writing, as well as for coursework workflow, Shovlin said. He stressed, however, that ethical considerations, rhetorical awareness, and transparency remain important in demonstrating appropriate use.


Automation Alert Sounds as Certificates Set to Expire Faster

Decreasing the validity time for a certificate offers multiple benefits. As previous certificate revocations have demonstrated, actually revoking every bad certificate in a timely manner, across the broad ecosystem, is a challenge. Having certificates simply expire more frequently helps address that. The CA/Browser Forum also expects an ancillary benefit of "increased consistency of quality, stability and availability of certificate lifecycle management components which enable automated issuance, replacement and rotation of certificates." While such automation won't fix every ill, the forum said that "it certainly helps." ... When it comes to getting the so-called cryptographic agility needed to manage both of those requirements, many organizations say they're not yet there. "While awareness is high, execution is lagging," says a new study from market researcher Omdia. "Many organizations know they need to act but lack clear roadmaps or the internal alignment to do so." ... For managing the much shorter certificate renewal timeframe, only 19% of surveyed organizations say they're "very prepared," with 40% saying they're somewhat prepared and another 40% saying they're not very prepared, and so far continue to rely on manual processes. "Historically, organizations have been able to get by with poor certificate hygiene because cryptography was largely static," said Tim Callan.
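Shorter lifetimes make expiry monitoring table stakes for moving off manual processes. A minimal stdlib-only sketch for flagging certificates nearing expiry (the 30-day threshold and hostname are assumptions):

```python
# Sketch: flag TLS certificates nearing expiry so renewal can be automated,
# using only the Python standard library. The renewal threshold and example
# hostname below are assumptions, not recommendations.
import ssl
import socket
from datetime import datetime, timezone

def parse_not_after(not_after: str) -> datetime:
    """Parse the OpenSSL-style 'notAfter' timestamp, e.g. 'Jun  1 12:00:00 2026 GMT'."""
    dt = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return dt.replace(tzinfo=timezone.utc)

def days_until_expiry(hostname: str, port: int = 443) -> float:
    """Connect, read the peer certificate, and return days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    remaining = parse_not_after(cert["notAfter"]) - datetime.now(timezone.utc)
    return remaining.total_seconds() / 86400

# Usage (requires network access):
#   if days_until_expiry("example.com") < 30: trigger renewal
```

In practice this kind of check would feed an ACME client or certificate lifecycle management platform rather than a script, but the principle (continuously measure time-to-expiry, renew well before the deadline) is the same.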


AI Data Centers Are Coming for Your Land, Water and Power

"Think of them as AI factories." But as data centers grow in size and number, often drastically changing the landscape around them, questions are looming: What are the impacts on the neighborhoods and towns where they're being built? Do they help the local economy or put a dangerous strain on the electric grid and the environment? ... As fast as the AI companies are moving, they want to be able to move even faster. Smith, in that Commerce Committee hearing, lamented that the US government needed to "streamline the federal permitting process to accelerate growth." ... Even as big tech companies invest heavily in AI, they also continue to promote their sustainability goals. Amazon, for example, aims to reach net-zero carbon emissions by 2040. Google has the same goal but states it plans to reach it 10 years earlier, by 2030. With AI's rapid advancement, experts no longer know if those climate goals are attainable, and carbon emissions are still rising. "Wanting to grow your AI at that speed and at the same time meet your climate goals are not compatible," Good says. For its Louisiana data center, Meta has "pledged to match its electricity use with 100% clean and renewable energy" and plans to "restore more water than it consumes," the Louisiana Economic Development statement reads.


Slow and Steady Security: Lessons from the Tortoise and the Hare

In security, it seems that we are constantly confronted by the next shiny object, item du jour, and/or overhyped topic. Along with this seems to come an endless supply of “experts” ready to instill fear in us around the “revolutionized threat landscape” and the “new reality” we apparently now find ourselves in and must come to terms with. Indeed, there is certainly no shortage of distractions in our field. Some of us are conscious of this near-constant tendency toward distraction. So how can we avoid the trap of running after every distraction that comes along? Or, to pose it another way, how can we appropriately invest our time and resources in areas where we are likely to see value and return on that investment? ... All successful security teams are governed by a solid security strategy. While the strategy can be adjusted from time to time as risks and threats evolve, it shouldn’t drift wildly and certainly not in an instant. If the newest thing demands radically altering the security strategy, it’s an indicator that it may be overblown. The good news is that a well-formed security strategy can be adapted to deal with just about anything new that arises in a steady and systematic way, provided that new thing is real.


IBM and Google say scalable quantum computers could arrive this decade

Most notable advances come from qubits built with superconducting circuits, as used in IBM and Google machines. These systems must operate near absolute zero and are notoriously hard to control. Other approaches use trapped ions, neutral atoms, or photons as qubits. While these approaches offer greater inherent stability, scaling up and integrating large numbers of qubits remains a formidable practical challenge. "The costs and technical challenges of trying to scale will probably show which are more practical," said Sebastian Weidt, chief executive at Universal Quantum, a startup developing trapped ions. Weidt emphasized that government support in the coming years could play a decisive role in determining which quantum technologies prove viable, ultimately limiting the field to a handful of companies capable of bringing a system to full scale. Widespread interest in quantum computing is attracting attention from both investors and government agencies. ... These next-generation technologies are still in their early stages, though proponents argue they could eventually surpass today's quantum machines. For now, industry leaders continue refining and scaling legacy architectures developed over years of lab research.


The 6 challenges your business will face in implementing MLSecOps

ML models are often “black boxes”, even to their creators, so there’s little visibility into how they arrive at answers. For security pros, this means limited ability to audit or verify behavior – traditionally a key aspect of cybersecurity. There are ways to circumvent this opacity of AI and ML systems: with Trusted Execution Environments (TEEs). These are secure enclaves in which organizations can test models repeatedly in a controlled ecosystem, creating attestation data. ... Models are not static and are shaped by the data they ingest. Thus, data poisoning is a constant threat for ML models that need to be retrained. Organizations must embed automated checks into the training process to enforce a continuously secure pipeline of data. Using information from the TEE and guidelines on how models should behave, AI and ML models can be assessed for integrity and accuracy each time they are given new information. ... Risk assessment frameworks that work for traditional software will not be applicable to the changeable nature of AI and ML programs. Traditional assessments fail to account for tradeoffs specific to ML, e.g., accuracy vs fairness, security vs explainability, or transparency vs efficiency. To navigate this difficulty, businesses must evaluate models on a case-by-case basis, looking to their mission, use case and context to weigh their risks.

Daily Tech Digest - July 23, 2025


Quote for the day:

“Our chief want is someone who will inspire us to be what we know we could be.” -- Ralph Waldo Emerson


AI in customer communication: the opportunities and risks SMBs can’t ignore

To build consumer trust, businesses must demonstrate that AI genuinely improves the customer experience, especially by enhancing the quality, relevance and reliability of communication. With concerns around data misuse and inaccuracy, businesses need to clearly explain how AI supports secure, accurate and personalized interactions, not just internally but in ways customers can understand and see. AI should be positioned as an enabler of human service, taking care of routine tasks so employees can focus on complex, sensitive or high-value customer needs. A key part of gaining long-term trust is transparency around data. Businesses must clearly communicate how customer information is handled securely and show that AI is being used responsibly and with care. This could include clearly labelling AI-generated communications such as emails or text messages, or proactively informing customers about what data is being used and for what purpose.  ... As conversations move beyond why AI should be used to how it must be used responsibly and effectively, companies have entered a make-or-break “audition phase” for AI. In customer communications, businesses can no longer afford to just talk about AI’s benefits, they must prove them by demonstrating how it enhances quality, security, and personalization.


The Expiring Trust Model: CISOs Must Rethink PKI in the Era of Short-Lived Certificates and Machine Identity

While the risk associated with certificates applies to all companies, it is a greater challenge for businesses operating in regulated sectors such as healthcare, where certificates must often be tied to national digital identity systems. In several countries, healthcare providers and services are now required to issue certificates bound to a National Health Identifier (NHI). These certificates are used for authentication, e-signature and encryption in health data exchanges and must adhere to complex issuance workflows, usage constraints and revocation processes mandated by government frameworks. Managing these certificates alongside public TLS certificates introduces operational complexity that few legacy PKI solutions were designed to handle in today’s dynamic and cloud-native environments. ... The urgency of this mandate is heightened by the impending cryptographic shift driven by the rise of quantum computing. Transitioning to post-quantum cryptography (PQC) will require organizations to implement new algorithms quickly and securely. Frequent certificate renewal cycles, which once seemed a burden, could now become a strategic advantage. When managed through automated and agile certificate lifecycle management, these renewals provide the flexibility to rapidly replace compromised keys, rotate certificate authorities or deploy quantum-safe algorithms as they become standardized.


The CISO code of conduct: Ditch the ego, lead for real

The problem doesn’t stop at vendor interactions. It shows up inside their teams, too. Many CISOs don’t build leadership pipelines; they build echo chambers. They hire people who won’t challenge them. They micromanage strategy. They hoard influence. And they act surprised when innovation dries up or when great people leave. As Jadee Hanson, CISO at Vanta, put it, “Ego builds walls. True leadership builds trust. The best CISOs know the difference.” That distinction matters, especially when your team’s success depends on your ability to listen, adapt, and share the stage. ... Security isn’t just a technical function anymore. It’s a leadership discipline. And that means we need more than frameworks and certifications; we need a shared understanding of how CISOs should show up. Internally, externally, in boardrooms, and in the broader community. That’s why I’m publishing this. Not because I have all the answers, but because the profession needs a new baseline. A new set of expectations. A standard we can hold ourselves, and each other, to. Not about compliance. About conduct. About how we lead. What follows is the CISO Code of Conduct. It’s not a checklist, but a mindset. If you recognize yourself in it, good. If you don’t, maybe it’s time to ask why. Either way, this is the bar. Let’s hold it. ... A lot of people in this space are trying to do the right thing. But there are also a lot of people hiding behind a title.


Phishing simulations: What works and what doesn’t

Researchers conducted a study on the real-world effectiveness of common phishing training methods. They found that the absolute difference in failure rates between trained and untrained users was small across various types of training content. However, we should take this with caution, as the study was conducted within a single healthcare organization and focused only on click rates as the measure of success or failure. It doesn’t capture the full picture. Matt Linton, Google’s security manager, said phishing tests are outdated and often frustrate employees more than they improve security habits. ... For any training program to work, you first need to understand your organization’s risk. Which employees are most at risk? What do they already know about phishing? Next, work closely with your IT or security teams to create phishing tests that match current threats. Tell employees what to expect. Explain why these tests matter and how they help stop problems. Don’t play the blame game. If someone fails a test, treat it as a chance to learn, not to punish. When you do this, employees are less likely to hide mistakes or avoid reporting phishing emails. When picking a vendor, focus on content and realistic simulations. The system should be easy to use and provide helpful reports.


Reclaiming Control: How Enterprises Can Fix Broken Security Operations

Asset management is critical to the success of the security operations function. In order to properly defend assets, I first and foremost need to know about them and be able to manage them. This includes applying policies, controls, and being able to identify assets and their locations when necessary, of course. With the move to hybrid and multi-cloud, asset management is much more difficult than it used to be. ... Visibility enables another key component of security operations – telemetry collection. Without the proper logging, eventing, and alerting, I can’t detect, investigate, analyze, respond to, and mitigate security incidents. Security operations simply cannot operate without telemetry, and the hybrid and multi-cloud world has made telemetry collection much more difficult than it used to be. ... If a security incident is serious enough, there will need to be a formal incident response. This will involve significant planning, coordination with a variety of stakeholders, regular communications, structured reporting, ongoing analysis, and a post-incident evaluation once the response is wrapped up. All of these steps are complicated by hybrid and multi-cloud environments, if not made impossible altogether. The security operations team will not be able to properly engage in incident response if they are lacking the above capabilities, and having a complex environment is not an excuse.
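Much of the telemetry difficulty described above is a normalization problem: each cloud and on-premises source emits events in its own shape. A minimal sketch of mapping two source formats onto one common schema (the field names are illustrative assumptions, not any vendor’s actual log format):

```python
def normalize_event(source: str, raw: dict) -> dict:
    """Map a provider-specific security event onto one common schema
    so detection and investigation logic can be written once."""
    if source == "cloud":
        return {"source": "cloud", "actor": raw.get("userIdentity"),
                "action": raw.get("eventName"), "time": raw.get("eventTime")}
    if source == "onprem":
        return {"source": "onprem", "actor": raw.get("user"),
                "action": raw.get("operation"), "time": raw.get("timestamp")}
    raise ValueError(f"unknown telemetry source: {source}")

event = normalize_event("cloud", {"userIdentity": "alice",
                                  "eventName": "DeleteBucket",
                                  "eventTime": "2025-10-02T03:14:00Z"})
print(event["action"])  # → DeleteBucket
```

In a real hybrid environment there would be one adapter per source, but the payoff is the same: downstream detection, investigation, and response code sees a single event shape.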


Legacy No More: How Generative AI is Powering the Next Wave of Application Modernization in India

Choosing the right approach to modernise legacy systems is a challenging task. Generative AI helps overcome the challenges faced in legacy systems and accelerates modernization. For example, it can be used to understand how legacy systems function through detailed business requirements. The resulting documents can be used to build new systems on the cloud in the second phase. This can make the process cheaper, too, and thus easier to get business cases approved. Additionally, generative AI can help create training documents for the current system if the organization wants to continue using its mainframes. In one example, generative AI might turn business models into microservices, API contracts, and database schemas ready for cloud-native inclusion. ... You need to have a holistic assessment of your existing system to implement generative AI effectively. Leaders must assess obsolete modules, interdependencies, data schemas, and throughput constraints to pinpoint high-impact targets and establish concrete modernization goals. Revamping legacy applications with generative AI starts with a clear understanding of the existing system. Organizations must conduct a thorough evaluation, mapping performance bottlenecks, obsolete modules, entanglements, and intricacies of the data flow, to create a modernization roadmap.


A Changing of the Guard in DevOps

Asimov, a newcomer in the space, is taking a novel approach — but addressing a challenge that’s as old as DevOps itself. According to the article, the team behind Asimov has zeroed in on a major time sink for developers: The cognitive load of understanding deployment environments and platform intricacies. ... What makes Asimov stand out is not just its AI capability but its user-centric focus. This isn’t another auto-coder. This is about easing the mental burden, helping engineers think less about YAML files and more about solving business problems. It’s a fresh coat of paint on a house we’ve been renovating for over a decade. ... Whether it’s a new player like Asimov or stalwarts like GitLab and Harness, the pattern is clear: AI is being applied to the same fundamental problems that have shaped DevOps from the beginning. The goals haven’t changed — faster cycles, fewer errors, happier teams — but the tools are evolving. Sure, there’s some real innovation here. Asimov’s knowledge-centric approach feels genuinely new. GitLab’s AI agents offer a logical evolution of their existing ecosystem. Harness’s plain-language chat interface lowers the barrier to entry. These aren’t just gimmicks. But the bigger story is the convergence. AI is no longer an outlier or an optional add-on — it’s becoming foundational. And as these solutions mature, we’re likely to see less hype and more impact.


Data Protection vs. Cyber Resilience: Mastering Both in a Complex IT Landscape

Traditional disaster recovery (DR) approaches designed for catastrophic events and natural disasters are still necessary today, but companies must implement a more security-event-oriented approach on top of that. Legacy approaches to disaster recovery are insufficient in an environment that is rife with cyberthreats as these approaches focus on infrastructure, neglecting application-level dependencies and validation processes. Further, threat actors have moved beyond interrupting services and now target data to poison, encrypt or exfiltrate it. ... Cyber resilience is now essential. With ransomware that can encrypt systems in minutes, the ability to recover quickly and effectively is a business imperative. Therefore, companies must develop an adaptive, layered strategy that evolves with emerging threats and aligns with their unique environment, infrastructure and risk tolerance. To effectively prepare for the next threat, technology leaders must balance technical sophistication with operational discipline, as the best defence is not solely a hardened perimeter; it’s also having a recovery plan that works. Today, companies cannot afford to choose between data protection and cyber resilience; they must master both.


Anthropic researchers discover the weird AI problem: Why thinking longer makes models dumber

The findings challenge the prevailing industry wisdom that more computational resources devoted to reasoning will consistently improve AI performance. Major AI companies have invested heavily in “test-time compute” — allowing models more processing time to work through complex problems — as a key strategy for enhancing capabilities. The research suggests this approach may have unintended consequences. “While test-time compute scaling remains promising for improving model capabilities, it may inadvertently reinforce problematic reasoning patterns,” the authors conclude. For enterprise decision-makers, the implications are significant. Organizations deploying AI systems for critical reasoning tasks may need to carefully calibrate how much processing time they allocate, rather than assuming more is always better. ... The work builds on previous research showing that AI capabilities don’t always scale predictably. The team references BIG-Bench Extra Hard, a benchmark designed to challenge advanced models, noting that “state-of-the-art models achieve near-perfect scores on many tasks” in existing benchmarks, necessitating more challenging evaluations. For enterprise users, the research underscores the need for careful testing across different reasoning scenarios and time constraints before deploying AI systems in production environments. 


How to Advance from SOC Manager to CISO?

Strategic thinking demands a firm grip on the organization's core operations, particularly how it generates revenue and its key value streams. This perspective allows security professionals to align their efforts with business objectives, rather than operating in isolation. ... This is related to strategic thinking but emphasizes knowledge of risk management and finance. Security leaders must factor in financial impacts to justify security investments and manage risks effectively. Balancing security measures with user experience and system availability is another critical aspect. If security policies are too strict, productivity can suffer; if they're too permissive, the company can be exposed to threats. ... Effective communication is vital for translating technical details into language senior stakeholders can grasp and act upon. This means avoiding jargon and abbreviations to convey information in a clear, accessible manner that resonates with multiple stakeholders, including executives who may not have a deep technical background. Communicating the impact of security initiatives in clear, concise language ensures decisions are well-informed and support company goals. ... You will have to ensure technical services meet business requirements, particularly in managing service delivery, implementing change, and resolving issues. All of this is essential for a secure and efficient IT infrastructure.

Daily Tech Digest - August 01, 2024

These are the skills you need to get hired in tech

While soft skills are important, communicating them to a prospective employer can present a conundrum. Tina Wang, division vice president of human resources at ADP, said there are a few ways for job seekers to bring attention to their behavioral skills. It goes beyond just listing “strong work ethic” or “problem solving” on a resume, “though it’s good to add it there too,” she said. Job seekers can incorporate behavior skills in a track record of job experiences. ... An interview with a prospective employer is also a good time to introduce behavioral skills, but time is limited and job-seekers won’t likely be able to share all their demonstrated skills and experience. “Preparation will go a long way, so think through your talking points and what is important to share,” Wang said. “Think about a few applicable, real work experiences where you demonstrated these skills and sketch out how and when to bring them during the interview process.” References can also be an excellent way to highlight behavioral skills. Intangibles such as a strong work ethic or attention to detail might be something former managers, team members or peers identify. 


Ideal authentication solution boils down to using best tools to stop attacks

Given the shifting nature of work, with more employees working remotely, the variety of gaps in protection is manifold. Clunky authentication experiences mean users are often asked to sign in multiple times a day for different applications and accounts. “Users get extremely frustrated when this occurs, and they end up having resistance to adopting these authentication methods,” Anderson says. To improve the situation, organizations need to manage authentication scenarios across onboarding, session tokens that keep users logged in, and the reality that username-and-password authentication is still used extensively throughout the security landscape, leaving vulnerabilities to fraud. “Passkeys are good for users because they simplify and streamline the actual authentication ceremony itself, where the user is actively involved,” Miller says. “It doesn’t necessarily decrease the number of times they have to authenticate but it does make it simpler and less taxing.” “They also have knock-on benefits of reducing the amount of information that leaks in the case of a database leak that can be used by an attacker. It shrinks the blast radius of account compromise.”


Should Today’s Developers Be More or Less Specialized?

“The need for specialists is not going to change. If anything, I expect it to increase,” says Hillion. “We still have a number of clients who rely on full-stack developers. I would say the general trend is towards businesses needing more specialized developers who have the right combination of technical skillsets and sector knowledge to deliver what is needed into the complex tech stack. There is significant demand for developers who specialize in particular industry sectors.” ... “Without basic knowledge, pursuing any specific development area is challenging,” says Ivanov. “That’s why starting by mastering basic technologies that someone is most proficient in, which helps them learn new things faster,” says Ivanov in an email interview. “However, core technologies should not be the end goal. It is also essential to stay up to date with technology trends and always continue using new technology.” Tasks that go beyond standard or general requirements need the involvement of specialists who have knowledge and experience in specific areas. For example, a project that requires complex algorithms or specific technologies will require a specialist with a deep understanding of them.


Between sustainability and risk: why CIOs are considering small language models

“In LLMs, the bulk of the data work is done statistically and then IT trains the model on specific topics to correct errors, giving it targeted quality data,” he says. “SLMs cost much less and require less data, but, precisely for this reason, the statistical calculation is less effective and, therefore, very high-quality data is needed, with substantial work by data scientists. Otherwise, with generic data, the model risks producing many errors.” Furthermore, SLMs are so promising and interesting for companies that even big tech offers and advertises them, like Google’s Gemma and Microsoft’s Phi-3. For this reason, according to Esposito, governance remains fundamental, within a model that should remain a closed system. “An SLM is easier to manage and becomes an important asset for the company in order to extract added value from AI,” he says. “Otherwise, with large models and open systems, you have to agree to share strategic company information with Google, Microsoft, and OpenAI. This is why I prefer to work with a system integrator that can develop customizations and provide a closed system, for internal use. 


Why geographical diversity is critical to build effective and safe AI tools

Geographical diversity is critical as organizations look to develop AI tools that can be adopted worldwide, according to Andrea Phua, senior director of the national AI group and director of the digital economy office at Singapore's Ministry of Digital Development and Information (MDDI). ... "The use of Gen AI has brought a new dimension to cyber threats. As AI becomes more accessible and sophisticated, threat actors will also become better at exploiting it," said CSA's chief executive and Commissioner of Cybersecurity David Koh. "As it is, AI already poses a formidable challenge for governments around the world [and] cybersecurity professionals would know that we are merely scratching the surface of gen AI's potential, both for legitimate applications and malicious uses," Koh said. He pointed to reports of AI-generated content, including deepfakes in video clips and memes, that have been used to sow discord and influence the outcome of national elections. At the same time, there are new opportunities for AI to be tapped to enhance cyber resilience and defense, he said. 


Cloud Migration Regrets: Should You Repatriate?

With increasing pressure to cut costs, many CTOs and CIOs are considering repatriating cloud workloads back on premises. As hard as it may seem, it’s important to think beyond just the cost. You must understand workload requirements to make sound decisions for each application. ... A lot of organizations have forgotten how much IT operations have changed since moving to the cloud. Cloud transformation meant revamping ITOps based on the chosen mix of Infrastructure-, Platform- or Software-as-a-Service (IaaS, PaaS or SaaS) services. Bringing applications back on premises strips away those service layers, and Ops teams may no longer be able or willing to accept the administrative and maintenance burden again. One final consideration before moving workloads off the cloud is security. I think security is one of the many advantages of cloud infrastructure. When businesses first started moving to the cloud, security was one of the biggest concerns. It turns out that cloud providers are better at security than you are. They can’t fix security holes in your software or other operator error scenarios, but a cloud infrastructure provides greater isolation if a breach does occur. 


Chess, AI & future of leadership

As computing power increases and its access cost reduces, AI will become the central force that drives all activities, including imagination! So, imagine the chessboard being AI-enabled. The board now has its intelligence with the ability to understand the context of the game to prompt the next set of moves. The difference between the board-level AI and the AI used by the player as her assistant is that the assistant knows the player’s psyche of defending or attacking, strengths and weaknesses of the player and her opponent, and factors these while offering suggestions. The two AIs may or may not be aligned in their suggestions since both may be accessing different references. Let’s activate the third dimension in chess – the pieces are also intelligent! They know their roles and those of the others. They too can think, strategise, and suggest. For instance, in a choice to move between the rook and the knight, the rook suggests the knight moves. The knight feels the Queen should move! This is the egalitarian version of chess! Does it feel real and practical? In the context of AI, there’s the Large Language Model, which processes data from a vast set of sources with a large number of constraints and rules. 


DigiCert validation bug sets up 83,267 SSL certs for revoking

One of the validation methods approved by the Certification Authority Browser Forum (CABF), whose guidelines provide best practices for securing internet transactions in browsers and other software, involves the customer adding a DNS CNAME record that includes a random value supplied by its certificate provider. The provider, in this case DigiCert, then does a DNS lookup and verifies that the random value is as provided, confirming that the customer controls the domain. The CABF requires that, in one format of the DNS CNAME entry, the random value be prefixed with an underscore, and DigiCert discovered that, in some cases, that character was not included, rendering the validation non-compliant. By CABF rules, those certificates must be revoked within 24 hours, with no exceptions. However, DigiCert said in an update to its status page Tuesday, and in an email to customers, “Unfortunately, some customers operating critical infrastructure are not in a position to have all their certificates reissued and deployed in time without critical service interruptions. To avoid disruption to critical services, we have engaged with browser representatives alongside these customers over the last several hours. ...”
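The compliance rule at issue is mechanical enough to audit: the DNS value must equal the CA-issued random value with a leading underscore. A simplified sketch (real validation also performs the DNS lookup itself, which is omitted here):

```python
def find_noncompliant(validations):
    """Return the domains whose DNS CNAME value lacks the required
    underscore prefix before the CA-issued random value.

    `validations` is an iterable of (domain, dns_value, random_value).
    """
    return [domain for domain, dns_value, random_value in validations
            if dns_value != f"_{random_value}"]

records = [
    ("good.example", "_a1b2c3", "a1b2c3"),  # compliant
    ("bad.example", "a1b2c3", "a1b2c3"),    # underscore missing
]
print(find_noncompliant(records))  # → ['bad.example']
```

An audit like this, run at issuance time rather than years later, is exactly the kind of check that would have caught the missing prefix before 83,267 certificates depended on it.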


Mind the Gap: Data Quality Is Not “Fit for Purpose”

When talking about data quality, we must therefore be clear about whose purpose, what requirements, established when, and by whom. Within the context of the DMBoK definition, the answer is that every consumer evaluates the quality of a data set independently. Data is considered to be of high quality when it is fit for my purpose, satisfies my requirements, established by me when I need the data. Data quality, defined in this way, is truly in the eye of the beholder. Furthermore, data quality analyses cannot be leveraged by new consumers. For decades, we in decision support have been selling the benefits of leveraging data across applications and analyses. It has been the fundamental justification for data warehouses, data lakes, data lakehouses, etc. But misalignment between the purpose for which data was created and the purpose for which it is being used may not be immediately apparent. Especially when the data is not well understood. The consequences are faulty models and erroneous analyses. We reflexively blame the quality of the data, but that’s not where the problem lies. This is not data quality. It is data fitness. 


Navigating Hope and Fear in a Socio-Technical Future

It is not about just spending more, that isn’t really working; you must SPEND BETTER. I and other architects literally train for decades to both cut costs and make great investment decisions. Technical debt accrual, technical health goals, and technical strategy don’t just deserve a seat at the table. They are becoming the table. A little more rationally, in all complex engineering fields, we are required to get signoff from legitimate professionals who have been measured against legitimate and hard-earned competencies. Not only does this create more stable outcomes, it actually saves and makes the economy money. Instead of ‘paying for two ok systems’, we pay for ‘one great one’. ... In all complex engineering ecosystems it is not just outputs and companies that are regulated. The role and skills of architects and engineers are not secret and they really aren’t that different by company. I believe I am the world’s expert on architecture skills, or at least one of a dozen of them. I have interviewed and assessed hundreds of companies, and thousands of architects. It is time to begin licensing. And it must be handed to a real professional society. It cannot be a vendor consortium. 



Quote for the day:

"You’ll never achieve real success unless you like what you’re doing." -- Dale Carnegie

Daily Tech Digest - July 15, 2024

For databases that need to hold and query hundreds of petabytes to exabytes of data, there are only a few viable choices. Be very careful about evaluating and planning exabyte-scale databases, as they are huge commitments and hard to change once the data is loaded. ... Data consistency is usually “strong” for SQL databases, meaning that all reads return the latest data. Data consistency may be anything from “eventual” to “strong” for NoSQL databases. Eventual consistency offers lower latency, at the risk of reading stale data. Consistency is the “C” in the ACID properties required for validity in the event of errors, network partitions, and power failures. The four ACID properties are Atomicity, Consistency, Isolation, and Durability. ... Some databases are faster at reads and queries, and others are faster at writes. The mix of reads and writes you expect from your application is a useful number to include in your database selection criteria, and can guide your benchmarking efforts. The optimum choice of index type differs between read-heavy applications and write-heavy applications.
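The strong-versus-eventual distinction is easy to see with a toy single-writer store and one asynchronously updated replica (an illustrative sketch, not any particular database's replication protocol):

```python
class ReplicatedStore:
    """Toy store: one primary plus one replica that lags behind."""

    def __init__(self):
        self.primary = {}
        self.replica = {}
        self._pending = []  # writes not yet shipped to the replica

    def write(self, key, value):
        self.primary[key] = value
        self._pending.append((key, value))  # replication is asynchronous

    def replicate(self):
        # In a real system this runs in the background after some lag.
        for key, value in self._pending:
            self.replica[key] = value
        self._pending.clear()

    def read_strong(self, key):
        return self.primary.get(key)    # always sees the latest write

    def read_eventual(self, key):
        return self.replica.get(key)    # may return stale data

store = ReplicatedStore()
store.write("balance", 100)
print(store.read_strong("balance"))    # → 100
print(store.read_eventual("balance"))  # → None (replica hasn't caught up)
store.replicate()
print(store.read_eventual("balance"))  # → 100
```

Eventual reads trade possible staleness for lower latency, which is the read-consistency spectrum described above.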


Can we please stop talking about replacing employees with AI?

With mundane, repetitive tasks taken over by AI, your team will have more room for creative, human work. Shift your organizational structure and job descriptions to open up room for the strategic, relationship-oriented work that AI just can’t do as well as a human. With less on employees’ to do lists, leaders and managers can shift their focus to challenging and developing them in other more valuable areas, such as building relationships and soft skills. AI can help here, too, but not in the way you may think. Rather than replacing humans, it can unlock insights (from data most companies already have) that lead to hiring and nurturing the right humans for the job. Software used to identify the important soft skills that lead to success in specific roles can be extremely valuable for both employers and employees. Ultimately, improvements in job performance and satisfaction will boost morale and your bottom line — a win-win. ... A good leader knows that for any organization to survive, people cannot be replaced. AI should instead help make employees exponentially more valuable to their companies by spending more time on creative productivity, and for companies to become more valuable to employees by improving their lives.


The Complexity of Solving Performance Problems

Our brains work in two different modes, particularly as we’re solving problems: intuitive and analytical. Intuitive is the mode that our brain is in when we’re not actively and effortfully thinking about something. It’s automatic. Analytical is the mode where we’re actually applying effort and working diligently in a focused manner in our brains. ... Nobody ever says, “Come fix my system,” without telling you what the problem is, right? They might just tell you that it’s slow, but there’s almost always something along with that. Maybe “It’s slow and I think there’s an IO problem” or “It’s slow and it looks like we’re pegging the CPU between two and three o’clock in the morning.” There’s always a little bit of information. That information gives us the ability to start theorizing early. That’s the main characteristic of this approach. ... With the methodical approach, we’re basically going dogmatically through the steps outlined above. And the key characteristic of this one is that we have the mental discipline to postpone that theorizing step until after we’ve gathered the data.


Liquid cooling solutions for high-density computing applications

One of the major challenges when designing data centers is anticipating end-of-life options and accurately predicting the nominal heat load from the beginning. This unpredictability often leads to a disconnect between projected and actual heat demands. For instance, a system designed for 1MW might only require 150kW in reality. Liquid cooling intensifies this challenge due to fluctuating cooling demands; as servers process large amounts of data, energy demand, and heat rejection peak, then drop drastically when idle. ... Liquid cooling offers significant sustainability gains and surpasses traditional air cooling methods in efficiency. By capturing heat directly at its source, liquid cooling allows operation at higher temperatures within the cooling loop. This facilitates the use of free cooling techniques, leveraging ambient air for cooling without resorting to additional energy-intensive processes. Moreover, the captured heat can be reused, further enhancing energy efficiency and sustainability.


What to do if certificates for passive authentication fail

If you have just one source and it becomes unavailable, your entire authentication process can be compromised. A wise solution would be to use a combination of sources to ensure higher service availability. At the government-to-government level, countries share certificates for passive authentication with each other through diplomatic channels, thus collecting their own database of trusted certificates. ... No matter which sources you trust to get certificates and master lists from—be it a central repository or each publisher directly—the responsibility remains yours. Organizations like the ICAO or BSI may facilitate the process, but they are not liable if something goes wrong. Also, understanding the certificate system and its expiration dates is crucial. You need to set up alerts for approaching expiration dates to ensure you have a fresh certificate ready to replace the old one. The most responsible organizations don’t rely solely on external sources. They collect certificates from various countries, create their own master list, sign it with their own certificate, and ensure that it’s valid.
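The expiration-alert advice above reduces to a scheduled sweep over your certificate inventory. A minimal sketch (the inventory shape and 30-day threshold are assumptions for illustration):

```python
from datetime import datetime, timedelta

def expiring_soon(inventory, now, threshold_days=30):
    """Return certificate names whose expiry falls within the alert
    window, sorted so the report is stable.

    `inventory` maps certificate name -> expiry datetime.
    """
    cutoff = now + timedelta(days=threshold_days)
    return sorted(name for name, expiry in inventory.items()
                  if expiry <= cutoff)

now = datetime(2025, 10, 2)
inventory = {
    "csca-country-a": now + timedelta(days=10),
    "csca-country-b": now + timedelta(days=400),
}
print(expiring_soon(inventory, now))  # → ['csca-country-a']
```

Feeding the output into a ticketing or alerting system gives you the lead time to obtain and validate a fresh certificate before the old one lapses.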


Ignoring quantum threats in CBDC design is reckless

Relying on quantum-mechanical phenomena, quantum computing far outpaces the ability of classical computers and supercomputers to solve complex algorithms in both speed and capability. Public-key cryptography is one of its primary targets. Thus, it’s alarming that developers of CBDCs are not prioritizing quantum-resistant designs in their early technical roadmaps. As RAND mathematician Alvin Moon noted, “A steady stream of advances in quantum computing technology is undeniably increasing the urgency to standardize and implement post-quantum algorithms, or algorithms which are resistant to both present-day computing and on-the-horizon quantum computing.” This security flaw represents a serious vulnerability in the US monetary system if central bank digital currencies are eventually adopted by the Federal Reserve and authorized by Congress. Valid concerns about surveillance and potential harm to civil liberties are driving debates and may positively influence laws, regulations and ethical governance standards. Technical designs continue to evolve, albeit slowly, to address privacy risks. However, without secure and resilient technical rails, CBDC adoption will remain a serious risk.


A CISO's Summary Of The Cyber Resilience Act

As a baseline, you need to make sure that whatever you’re building is as secure as is reasonable to be. Your product must have minimal attack surfaces and should be hardened. This means that by default it should not allow unauthorized access; its data is encrypted or protected; and its data and commands can’t be intercepted or manipulated. It also means that your product must keep working, even under a DoS attack, and that it mustn’t interrupt other devices, even when attacked with exploits. Additionally, your product needs to be able to monitor or log changes in the device, as well as receive security updates or rollbacks. This includes direct or remote updates, user notifications about updates and the ability to roll back updates or reset the product to a factory/default state. On top of that, you’ll also need to follow or provide documentation and information about your product, including a description of the design, development and vulnerability handling process. This would also include a formal SBOM, where you’d include more information on the components in your product and its potential vulnerabilities.
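The SBOM requirement can be met with established formats. Below is a minimal CycloneDX-style document built in Python — only a few core fields are shown as a sketch; a real SBOM also carries licenses, component hashes, and a dependency graph:

```python
import json

def minimal_sbom(product: str, components) -> str:
    """Serialize a minimal CycloneDX-style SBOM for a product.

    `components` is an iterable of (name, version) pairs.
    """
    doc = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "metadata": {"component": {"type": "application", "name": product}},
        "components": [{"type": "library", "name": name, "version": version}
                       for name, version in components],
    }
    return json.dumps(doc, indent=2)

sbom = minimal_sbom("smart-sensor-firmware", [("openssl", "3.0.13")])
```

In practice you would generate this from your build system rather than by hand, so the component list stays in lockstep with what actually ships.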


The Compelling Need for Low-Code in Legacy Application Modernization

With technology at the centre of India’s transformation initiatives, its base digital infrastructure must itself be transformed if the nation hopes to achieve its goals. However, in this quest to become a digital leader lies a major challenge: legacy systems. Even today, many organisations still run daily processes on legacy systems, which are a significant hurdle on the path to digital transformation. Legacy systems not only consume a significant portion of IT budgets but also leave businesses unable to adapt to fast-changing customer needs. ... There is another compelling method that has gained considerable traction in recent times: low-code visual software development, which helps developers accelerate software delivery through a friendly, intuitive drag-and-drop process. The low-code development approach offers organisations the ability to modernise their core systems without the cost and risk of traditional development approaches. Let us look at some of the common legacy systems that are prevalent in most organisations today and understand how low-code platforms can be used effectively.


How To Establish A Culture Of Cyber-Resilience Within An Organization

Workers ought to be aware of how their activities affect the security posture of the company and should feel confident enough to serve as the first line of defense. It is also essential to routinely update training programs to reflect the latest threat landscape. Organizations may greatly lower their susceptibility to cyberattacks by encouraging a culture of ongoing education and awareness. ... Incorporate cybersecurity into all company operations to ensure security issues are considered in every undertaking and decision-making process. This interconnectedness allows each department to understand its responsibility in safeguarding the company’s digital assets, leading to a holistic approach to security. One essential first step is to create security rules and processes that support company goals. Making sure cybersecurity needs are part of the planning and development phases of a project facilitates early risk identification and the application of suitable safeguards. ... More than simply putting security technology into place, an organization must develop a culture of cyber-resilience.


Mastering Long-Running Processes in Modern Architectures

It is important to note that long-running processes refer to the act of waiting, not to algorithms running for extended periods. This waiting can be due to human actions, external responses, or intentionally allowing time to pass. These processes can take hours, days, weeks, or even longer. ... Besides business reasons for waiting, there are also technical ones, such as asynchronous communication delays, failures in message delivery, and the unavailability of peer services in distributed systems. If not addressed, these issues can lead to cascading failures. ... As soon as you start handling the complexities of distributed systems and long-running processes in payment processing, it is essential to embrace asynchronous communication in your API design. Although payments are usually quick and straightforward, situations like declined credit cards or unavailable services demand a different approach. We can build more adaptable and robust payment systems by designing systems that can handle immediate and delayed responses and using signals like an HTTP 202 (Accepted) status code to indicate that processing will continue in the background.
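The 202 pattern described above can be sketched without any web framework. The function names and the in-memory store below are illustrative assumptions; a real system would persist the state and hand the work to a message queue or workflow engine:

```python
import uuid

# In-memory store standing in for a durable workflow state store (assumption).
_pending: dict[str, str] = {}

def submit_payment(card_token: str, amount_cents: int) -> tuple[int, dict]:
    """Accept a payment request. When the processor can't answer synchronously,
    return 202 plus a status URL the client can poll later."""
    payment_id = str(uuid.uuid4())
    _pending[payment_id] = "processing"
    # A real system would enqueue the actual charge here instead of blocking.
    return 202, {"id": payment_id, "status_url": f"/payments/{payment_id}"}

def payment_status(payment_id: str) -> tuple[int, dict]:
    """Polling endpoint: report where a previously accepted payment stands."""
    status = _pending.get(payment_id)
    if status is None:
        return 404, {"error": "unknown payment"}
    return 200, {"id": payment_id, "status": status}
```

The client treats 202 not as success but as "accepted for processing", and either polls the status URL or receives a callback once the outcome is known.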



Quote for the day:

"Your problem isn't the problem. Your reaction is the problem." -- Anonymous

Daily Tech Digest - March 22, 2021

Bitcoin’s Greatest Feature Is Also Its Existential Threat

The botnet’s designers are using this idea to create an unblockable means of coordination, but the implications are much greater. Imagine someone using this idea to evade government censorship. Most Bitcoin mining happens in China. What if someone added a bunch of Chinese-censored Falun Gong texts to the blockchain? What if someone added a type of political speech that Singapore routinely censors? Or cartoons that Disney holds the copyright to? In Bitcoin and most other public blockchains, there are no central, trusted authorities. Anyone in the world can perform transactions or become a miner. Everyone is equal to the extent that they have the hardware and electricity to perform cryptographic computations. This openness is also a vulnerability, one that opens the door to asymmetric threats and small-time malicious actors. Anyone can put information in the one and only Bitcoin blockchain. Again, that’s how the system works. Over the last three decades, the world has witnessed the power of open networks: blockchains, social media, the very web itself. What makes them so powerful is that their value is related not just to the number of users, but the number of potential links between users.


India’s Quest Towards Quantum Supremacy

The digital partnership between the Indian Institute of Science Education and Research (IISER) in Pune and Finland’s Aalto University makes it highly likely that India will get its first quantum computer. ... Talking about the partnership, Neeta Bhushan, the joint secretary (Central Europe), external affairs ministry, stated that the idea of jointly developing a quantum computer with the use of AI and 5G technology is an important area of collaboration for both countries. Considering that Nokia and other Finnish companies are leading the world in mobile technology, this digital collaboration will see the two countries working together on quantum technologies and computing. Hence, the partnership will have the leverage to deploy the latest technologies available to both countries. ... The partnership can lead us towards a new ecosystem altogether, and many things can be expected of it. The post-COVID changes in global power-sharing and the recent technological developments to handle the crisis have brought India to the centre stage. Consequently, quantum encryption is one of the basic applications expected from this collaboration.


Remote working still isn't perfect. These are the things that need fixing

A new report from O2 Business explores these insights in greater depth. The UK mobile operator surveyed 2,099 workers who had previously been office-based to understand how their needs and expectations of work had changed. It found that the majority of employees welcomed the notion of splitting their time between the office and home-working going forward, but also called for a closer alignment of operations, IT and HR in order to support individual work choices and maximize workplace productivity. Generally, employees are satisfied with their organization's response to the pandemic, O2 found: 69% of workers felt that their employers had supported them during the pandemic, with just 11% disagreeing with this statement. But less than two-thirds (65%) of employees felt confident that their organization was prepared for the future world of work. O2 said this indicated some businesses would struggle to adapt to the more flexible working arrangements that many are planning to adopt post-pandemic. The mad scramble to remote working has been one of the most trying aspects for businesses over the past year.


Fight microservices complexity with low-code development

A low-code platform takes care of nearly everything that conventionally is coded for an application. Most of the low-level programming and integration work is taken care of via tool configurations, which saves developers a lot of time and headaches. However, think carefully about where you apply low-code in a microservices architecture. As long as the app is simple, clean and doesn't require many integration points, low-code development might be the right alternative to more manual and complex microservices projects. Low-code builds are an easy choice for applications that don't need to integrate with other databases or only rely on a series of small tables. Short-lived conference apps or marketing promotions that run with user ID information are good examples of this. However, a low-code approach does not replace large-scale microservices development. Once you need to share information between applications in real time, the tools and programming techniques involved become much more sophisticated. While the low-code approach helps developers steer clear of over-engineering apps that don't need it, low-code likely won't provide the database integration, messaging or customization capabilities needed for an enterprise-level microservices architecture.


Edge Computing Growth Drives New Cybersecurity Concerns

Effectively protecting the edge means understanding how cybersecurity protection schemes work in an enterprise that uses not only edge computing, but also the cloud and traditional resources. Most enterprises are clearly focused on data security and application security, and are using tools such as web application firewalls (WAF), runtime application self-protection (RASP), data exfiltration protection and, of course, endpoint protection. Since the edge has the ability to “touch” data and applications, as well as use identity to connect and determine entitlements, a great deal of potentially sensitive information passes through the edge. Much, if not all, of that traffic moves through a content delivery network (CDN), where hosts provide the connectivity and, hopefully, wrap encryption around that traffic to protect it from interception. However, intrusion and data exfiltration still happen. “Digital transformation is driving more and more applications to the edge, and with that movement, businesses are losing visibility into what is actually happening on the network, especially where edge operation occurs,” Hathaway said. “Gaining visibility allows cybersecurity professionals to get a better understanding of what is actually happening at the edge,” he said.


Move Your Automation Efforts From Pilot To Reality

Talent is another crucial part of the equation that not enough customers take into account. I’ve worked with many customers that don’t have dedicated automation centers of excellence, or specific in-house expertise to tackle automation the right way. An enterprise with multiple technologies in place must ensure that those technologies are communicating with each other. By bringing together technical experts, your processes can be better visualized and monitored end-to-end across the organization, leading to a higher chance of success. The complexity and effort involved in this kind of endeavour can be off-putting, but it’s worth the reward. Nor is it truly as complicated as it sounds — execution management systems, for example, already bring together technologies like process mining, automation and AI into a seamless, intelligent execution layer. Bring in or train the right people to champion it, and you’ve got a head start on the next step of the journey. So while many companies haven’t been able to bring the full promise of automation to bear at scale just yet, that promise is getting closer to becoming a reality every day.


HowTo: Optimize Certificate Management to Identify and Control Risk

End-to-end certificate management gives businesses complete visibility and lifecycle control over any certificate in their environment, helping them reduce risk and control operational costs. Even in the most complex enterprise environments, certificate automation offers speed, flexibility and scale. Full visibility over all digital certificates and keys means that even the largest enterprises can have a centralized view of digital identities and security processes. Security leaders can then access expiration dates and maintain cryptographic strength while avoiding the time-consuming, demanding, and risky task of manually discovering, supervising, and renewing certificates. As organizations continue to grow and evolve, so does the range of certificates deployed and the set of people deploying them, which increases the potential for certificates to be installed in your environment that are out of sight of IT security teams and left unmanaged. To avoid being blindsided by these “rogue” certificates, enterprises are turning toward automated universal discovery.
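As a small illustration of automated discovery of expiry dates, the sketch below checks a live endpoint’s certificate using only Python’s standard library. The host name passed in would be a placeholder from your own inventory, and the `notAfter` date format follows the conventional output of `ssl.getpeercert()`:

```python
import socket
import ssl
from datetime import datetime, timezone

def parse_not_after(not_after: str) -> datetime:
    """Parse the 'notAfter' field as returned by ssl.getpeercert(),
    e.g. 'Jun  1 12:00:00 2026 GMT'."""
    parsed = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return parsed.replace(tzinfo=timezone.utc)

def days_until_expiry(host: str, port: int = 443) -> int:
    """Connect over TLS and report how many days remain on the served certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return (parse_not_after(cert["notAfter"]) - datetime.now(timezone.utc)).days
```

A scheduled job can sweep an inventory of hosts with `days_until_expiry` and alert when any result drops below a renewal threshold — a crude stand-in for the universal discovery and lifecycle automation the article describes.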


On the Road to Good Cloud Security: Are We There Yet?

The research also uncovered a disconnect that raises the question: Is that confidence misplaced? When asked to rate the level of visibility the security team had into their organization's use of specific cloud service types, including software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS), that same level of confidence faltered. For example, when asked to rate the security team's level of visibility into their organization's SaaS usage on a five-point scale, with 1 being the highest level, only 18% gave it a 1 and 27% gave it a 2. Visibility into PaaS and IaaS was rated as only slightly better. At the same time, respondents' knowledge of the shared responsibility model was found to be lacking. When asked to indicate whether the customer or cloud provider was responsible for securing a list of seven different elements that make up an IaaS account, around half of respondents gave the wrong answer. Specifically, 63% erroneously indicated that the cloud provider was responsible for securing virtual network connections, 55% erroneously indicated that the cloud provider was responsible for securing applications, and 50% got it wrong when they said the cloud provider was responsible for securing users who were accessing cloud data and applications.


5 AI-for-Industry Myths Debunked

Up until, and during, the AI hype in the nineties, artificial intelligence was a scientific discipline that almost exclusively dealt with data and algorithms. Over the past decades however, the field has matured, and AI has become an integral part of automated decisioning systems that are at the heart of what we do as individuals and organizations. Consequently, a large portion of AI research, development, and implementation encompasses people and processes. I remember having a business conversation with a large energy provider in which we were talking about automated systems and data-driven methods that, driven by customer data and smart meters, could enhance their customers’ experience. One hour into the meeting, they suddenly asked: “This all looks very promising, but shouldn’t we also do something with AI?” ... If you have the combined luck and skills, you can probably cook a decent meal with ingredients that come from a randomly filled refrigerator. The real question, however, is: “What do you want to achieve?” In the example of the refrigerator, it might occasionally be an effective solution if you need to quickly fill stomachs and don’t have time to go shopping. 


Cloudflare wants to be your corporate network backbone

With Magic WAN, Cloudflare aims to simplify that. Cloudflare's global Anycast network is already built for high performance and availability to serve its core CDN business. The company has data centers in more than 200 cities across over 100 countries with local peering at internet exchange points. Regardless of where branch offices or employees are located, chances are high they'll always connect to a server close to them, and the traffic will then be routed through Cloudflare's private network efficiently, benefiting from its performance optimizations, smart routing and security. With Magic WAN organizations only need to set up Anycast GRE tunnels from their offices or datacenters to Cloudflare and they can then define their private networks and routing rules in a central dashboard. Cloudflare's existing Argo Tunnel, Network Interconnect and soon IPsec can also be used to connect datacenters and VPCs to its network, while roaming employees will connect using Cloudflare WARP, a secure tunneling solution that's built around the highly performant Wireguard VPN protocol. This also solves the scalability and performance issues that organizations have faced with traditional VPN gateways and concentrators when they were suddenly faced with a large remote workforce due to the pandemic.



Quote for the day:

"A true dreamer is one who knows how to navigate in the dark" -- John Paul Warren