
Daily Tech Digest - November 20, 2024

5 Steps To Cross the Operational Chasm in Incident Management

A siloed approach to incident management slows down decision-making and harms cross-team communication during incidents. Instead, organizations must cultivate a cross-functional culture where all team members are able to collaborate seamlessly. Cross-functional collaboration ensures that incident response plans are comprehensive and account for the insights and expertise contained within specific teams. This communication can be expedited with the support of AI tools to summarize information and draft messages, as well as the use of automation for sharing regular updates. ... An important step in developing a proactive incident management strategy is conducting post-incident reviews. When incidents are resolved, teams are often so busy that they are forced to move on without examining the contributing factors or identifying where processes can be improved. Conducting blameless reviews after significant incidents — and ideally every incident — is crucial for continuously and iteratively improving the systems in which incidents occur. This should cover both the technological and human aspects. Reviews must be thorough and uncover process flaws, training gaps or system vulnerabilities to improve incident management.


How to transform your architecture review board

A modernized approach to architecture review boards should start with establishing a partnership, building trust, and seeking collaboration between business leaders, devops teams, and compliance functions. Everyone in the organization uses technology, and many leverage platforms that extend the boundaries of architecture. Winbush suggests that devops teams must also extend their collaboration to include enterprise architects and review boards. “Don’t see ARBs as roadblocks, and treat them as a trusted team that provides much-needed insight to protect the team and the business,” he suggests. ... “Architectural review boards remain important in agile environments but must evolve beyond manual processes, such as interviews with practitioners and conventional tools that hinder engineering velocity,” says Moti Rafalin, CEO and co-founder of vFunction. “To improve development and support innovation, ARBs should embrace AI-driven tools to visualize, document, and analyze architecture in real-time, streamline routine tasks, and govern app development to reduce complexity.” ... “Architectural observability and governance represent a paradigm shift, enabling proactive management of architecture and allowing architects to set guardrails for development to prevent microservices sprawl and resulting complexity,” adds Rafalin.


Business Internet Security: Everything You Need to Consider

Each device on your business’s network, from computers to mobile phones, represents a potential point of entry for hackers. Treat connected devices as a door to your Wi-Fi networks, ensuring each one is secure enough to protect the entire structure. ... Software updates often include vital security patches that address identified vulnerabilities. Delaying updates on your security software is like ignoring a leaky roof; if left unattended, it will only get worse. Patch management and regularly updating all software on all your devices, including antivirus software and operating systems, will minimize the risk of exploitation. ... With cyber threats continuing to evolve and become more sophisticated, businesses can never be complacent about internet security and protecting their private network and data. Taking proactive steps toward securing your digital infrastructure and safeguarding sensitive data is a critical business decision. Prioritizing robust internet security measures safeguards your small business and ensures you’re well-equipped to face whatever kind of threat may come your way. While implementing these security measures may seem daunting, partnering with the right internet service provider like Optimum can give you a head start on your cybersecurity journey.


How Google Cloud’s Information Security Chief Is Preparing For AI Attackers

To build out his team, Venables added key veterans of the security industry, including Taylor Lehmann, who led security engineering teams for the Americas at Amazon Web Services, and MK Palmore, a former FBI agent and field security officer at Palo Alto Networks. “You need to have folks on board who understand that security narrative and can go toe-to-toe and explain it to CIOs and CISOs,” Palmore told Forbes. “Our team specializes in having those conversations, those workshops, those direct interactions with customers.” ... Generally, a “CISO is going to meet with a very small subset of their clients,” said Charlie Winckless, senior director analyst on Gartner's Digital Workplace Security team. “But the ability to generate guidance on using Google Cloud from the office of the CISO, and make that widely available, is incredibly important.” Google is trying to do just that. Last summer, Venables co-led the development of Google’s Secure AI Framework, or SAIF, a set of guidelines and best practices for security professionals to safeguard their AI initiatives. It’s based on six core principles, including making sure organizations have automated defense tools to keep pace with new and existing security threats, and putting policies in place that make it faster for companies to get user feedback on newly deployed AI tools.


11 ways to ensure IT-business alignment

A key way to facilitate alignment is to become agile enough to stay ahead of the curve, and be adaptive to change, Bragg advises. The CIO should also speak early when sensing a possible business course deviation. “A modern digital corporation requires IT to be a good partner in driving to the future rather than dwelling on a stable state.” IT leaders also need to be agile enough to drive and support change, communicate effectively, and be transparent about current projects and initiatives. ... To build strong ties, IT leaders must also listen to and learn from their business counterparts. “IT leaders can’t create a plan to enable business priorities in a vacuum,” Haddad explains. “It’s better to ask [business] leaders to share their plans, removing the guesswork around business needs and intentions.” ... When IT and the business fail to align, silos begin to form. “In these silos, there’s minimal interaction between parties, which leads to misaligned expectations and project failures because the IT actions do not match up with the company direction and roadmap,” Bronson says. “When companies employ a reactive rather than a proactive approach, the result is an IT function that’s more focused on putting out fires than being a value-add to the business.”


Edge Extending the Reach of the Data Center

Savings in communications can be achieved, and low-latency transactions can be realized if mini-data centers containing servers, storage and other edge equipment are located proximate to where users work. Industrial manufacturing is a prime example. In this case, a single server can run entire assembly lines and robotics without the need to tap into the central data center. Data that is relevant to the central data center can be sent later in a batch transaction at the end of a shift. ... Organizations are also choosing to co-locate IT in the cloud. This can reduce the cost of on-site hardware and software, although it does increase the cost of processing transactions and may introduce some latency into the transactions being processed. In both cases, there are overarching network management tools that enable IT to see, monitor and maintain network assets, data, and applications no matter where they are. ... Most IT departments are not at a point where they have all of their IT under a central management system, with the ability to see, tune, monitor and/or mitigate any event or activity anywhere. However, we are at a point where most CIOs recognize the necessity of funding and building a roadmap to this “uber management” network concept.


Orchestrator agents: Integration, human interaction, and enterprise knowledge at the core

“Effective orchestration agents support integrations with multiple enterprise systems, enabling them to pull data and execute actions across the organizations,” Zllbershot said. “This holistic approach provides the orchestration agent with a deep understanding of the business context, allowing for intelligent, contextual task management and prioritization.” For now, AI agents exist as islands unto themselves. However, service providers like ServiceNow and Slack have begun integrating with other agents. ... Although AI agents are designed to go through workflows automatically, experts said it’s still important that the handoff between human employees and AI agents goes smoothly. The orchestration agent allows humans to see where the agents are in the workflow and lets the agent figure out its path to complete the task. “An ideal orchestration agent allows for visual definition of the process, has rich auditing capability, and can leverage its AI to make recommendations and guidance on the best actions. At the same time, it needs a data virtualization layer to ensure orchestration logic is separated from the complexity of back-end data stores,” said Pega’s Schuerman.


The Transformative Potential of Edge Computing

Edge computing devices like sensors continuously monitor the car’s performance, sending data back to the cloud for real-time analysis. This allows for early detection of potential issues, reducing the likelihood of breakdowns and enabling proactive maintenance. As a result, the vehicle is more reliable and efficient, with reduced downtime. Each sensor relies on a hyperconnected network that seamlessly integrates data-driven intelligence, real-time analytics, and insights through an edge-to-cloud continuum – an interconnected ecosystem spanning diverse cloud services and technologies across various environments. By processing data at the edge, within the vehicle, the amount of data transmitted to the cloud is reduced. ... No matter the industry, edge computing and cloud technology require a reliable, scalable, and global hyperconnected network – a digital fabric – to deliver operational and innovative benefits to businesses and create new value and experiences for customers. A digital fabric is pivotal in shaping the future of infrastructure. It ensures that businesses can leverage the full potential of edge and cloud technologies by supporting the anticipated surge in network traffic, meeting growing connectivity demands, and addressing complex security requirements.


The risks and rewards of penetration testing

It is impossible to predict how systems may react to penetration testing. As was the case with our customer, an unknown flaw or misconfiguration can lead to catastrophic results. Skilled penetration testers usually can anticipate such issues. However, even the best white hats are imperfect. It is better to discover these flaws during a controlled test, than during a data breach. While performing tests, keep IT support staff available to respond to disruptions. Furthermore, do not be alarmed if your penetration testing provider asks you to sign an agreement that releases them from any liability due to testing. ... Black hats will generally follow the path of least resistance to break into systems. This means they will use well-known vulnerabilities they are confident they can exploit. Some hackers are still using ancient vulnerabilities, such as SQL injection, which date back to the late 1990s. They use these because they work. It is uncommon for black hats to use unknown or “zero-day” exploits. These are reserved for high-value targets, such as government, military, or critical infrastructure. It is not feasible for white hats to test every possible way to exploit a system. Rather, they should focus on a broad set of commonly used exploits. Lastly, not every vulnerability is dangerous.


How Data Breaches Erode Trust and What Companies Can Do

A data breach can prompt customers to lose trust in an organisation, compelling them to take their business to a competitor whose reputation remains intact. A breach can discourage partners from continuing their relationship with a company since partners and vendors often share each other’s data, which may now be perceived as an elevated risk not worth taking. Reputational damage can devalue publicly traded companies and scupper a funding round for a private company. The financial cost of reputational damage may not be immediately apparent, but its consequences can reverberate for months and even years. ... In order to optimise cybersecurity efforts, organisations must consider the vulnerabilities particular to them and their industry. For example, financial institutions, often the target of more involved patterns like system intrusion, must invest in advanced perimeter security and threat detection. With internal actors factoring so heavily in healthcare, hospitals must prioritise cybersecurity training and stricter access controls. Major retailers that can’t afford extended downtime from a DoS attack must have contingency plans in place, including disaster recovery.



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry

Daily Tech Digest - November 03, 2024

How AI-Powered Vertical SaaS Is Taking Over Traditional Enterprise SaaS

Enterprise decision-makers no longer care about the underlying technology itself—they care about what it delivers. They care about tangible outcomes like cost savings, operational efficiencies, and improved customer experiences. This shift in focus is causing companies to rethink their approach to enterprise software. ... Unlike traditional SaaS, which is built for broad use cases, vertical SaaS is deeply tailored to specific industries. By using AI, it can offer real-time insights, automation, and optimisations that solve problems unique to each sector. ... This hyper-targeted approach allows vertical SaaS to deliver tangible business outcomes rather than generic efficiencies. AI powers this shift by enabling platforms to adapt to industry-specific challenges, automate routine tasks, and provide insights at a scale and speed that were previously unattainable. Think of traditional SaaS like a Swiss Army knife — versatile, but not always the best tool for a specific task. Vertical SaaS, however, is like a surgeon’s scalpel or a craftsman’s chisel — precisely designed for a specific job, delivering results with pinpoint accuracy and efficiency. What would you rather use for mission-critical work: a multi-tool that does everything adequately or an instrument built to perform one task perfectly?


Ending Microservices Chaos: How Architecture Governance Keeps Your Microservices on Track

With proper software architecture governance, you can reduce microservices complexity, ramp up developers faster, reduce MTTR, and improve the resiliency of your system, all while building a culture of intentionality. ... In addition to controlling the chaos of microservices with governance and observability, maintaining a high standard of security and code quality is essential. When working with distributed systems, the complexity of microservices — if left unchecked — can lead to vulnerabilities and technical debt. ... Tools from SonarSource — such as SonarLint or SonarQube — focus on continuous code quality and security. They help developers identify potential issues such as code smells, duplication, or even security risks like SQL injection. By integrating seamlessly with CI/CD pipelines, they ensure that every deployment follows strict security and code quality standards. The connection between code quality, application security, and architectural observability is clear. Poor code quality and unresolved vulnerabilities can lead to a fragile architecture that is prone to outages and security incidents. By proactively managing your code quality and security using these tools, you reduce the risk of microservices complexity spiraling out of control.


What is quiet leadership? Examples, traits & benefits

Quiet leadership is a leadership style defined by empathy, creativity, active listening, and attention to detail. It focuses on collaboration and communication instead of control. At its core is quiet confidence, not arrogance. Quiet leaders prefer to solve problems through teamwork and encouragement, not aggression. They are compassionate, understanding, open, and approachable. Most importantly, they earn their team’s respect instead of demanding it. ... Instead of criticizing yourself for not being an extroverted leader, embrace who you are. Don’t try to be someone you’re not. You might wonder if a quiet style can work because of leadership stereotypes. But in reality, it can be comforting to others. Build self-awareness and notice how you positively impact people. By accepting your unique leadership style, you’ll find what works best for you and your team. If you use your strengths, being a quiet leader can be a superpower. For example, quiet leaders are great listeners. Active listening is rare, so be proud if you have that skill. ... As a quiet leader, you’ll need to step outside your comfort zone at times. This can be exhausting, so make time to recharge and regain energy. 


From Code To Conscience: Humanities’ Role In Fintech’s Evolution

Reflecting on the day, it became clear that studying for a career in fintech—or any technology field—is not just about understanding mechanics; it’s about grasping the bigger picture and realizing the power of technology to serve people, not just profit. In a sector as influential as fintech, this balanced approach is crucial. A humanities background fosters exactly the kind of critical, thoughtful perspective that today’s technology fields demand. Combining technical knowledge with grounding in ethics, history, and critical problem-solving will be essential for tomorrow’s leaders, especially as fintech continues to shape societal norms and economic structures. The Pace of Fintech conference underscored how the intersection of AI, fintech, and the humanities is shaping a more thoughtful future for technology. Artificial intelligence, while transformative, requires a balance between innovation and ethics—an understanding of both its potential and its responsibilities. Humanities-trained thinkers bring crucial perspectives to this field, prompting questions about fairness, transparency, and societal impact that purely technical approaches may overlook.


Overcoming data inconsistency with a universal semantic layer

As if the data landscape weren’t complex enough, data architects began implementing semantic layers within data warehouses. Architects might think of the data assets they manage as the single source of truth for all use cases. However, that is not typically the case because millions of denormalized table structures are typically not “business-ready.” When semantic layers are embedded within various warehouses, data engineers must connect analytics use cases to data by designing and maintaining data pipelines with transforms that create “analytics-ready” data. ... What is needed is a universal semantic layer that defines all the metrics and metadata for all possible data experiences: visualization tools, customer-facing analytics, embedded analytics, and AI agents. With a universal semantic layer, everyone across the business agrees on a standard set of definitions for terms like “customer” and “lead,” as well as standard relationships among the data (standard business logic and definitions), so data teams can build one consistent semantic data model. A universal semantic layer sits on top of data warehouses, providing data semantics (context) to various data applications. It works seamlessly with transformation tools, allowing businesses to define metrics, prepare data models, and expose them to different BI and analytics tools.
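
To make the idea concrete, here is a minimal, hypothetical sketch of a semantic layer that defines a metric once and compiles it into SQL for any downstream consumer. The class and method names (Metric, SemanticLayer, to_sql) are illustrative assumptions, not any vendor's API:

```python
from dataclasses import dataclass

# Hypothetical illustration only: one place where business terms and metrics
# are defined once, then reused by every downstream tool instead of being
# re-derived per dashboard, per AI agent, or per embedded-analytics app.

@dataclass
class Metric:
    name: str
    sql_expression: str   # the business logic lives here, exactly once
    description: str

class SemanticLayer:
    def __init__(self):
        self._metrics = {}

    def define(self, metric: Metric) -> None:
        self._metrics[metric.name] = metric

    def to_sql(self, metric_name: str, table: str) -> str:
        """Compile the shared definition into SQL for any BI tool or AI agent."""
        m = self._metrics[metric_name]
        return f"SELECT {m.sql_expression} AS {m.name} FROM {table}"

layer = SemanticLayer()
layer.define(Metric(
    name="active_customers",
    sql_expression="COUNT(DISTINCT customer_id)",
    description="Customers with at least one order in the period",
))

# Every consumer (dashboard, embedded analytics, AI agent) gets the same answer.
print(layer.to_sql("active_customers", "analytics.orders"))
```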


Server accelerator architectures for a wide range of applications

The highest-performing architecture for AI workloads is a system that allows the accelerators to communicate with each other without having to go back through the CPU. This type of system requires that the accelerators be mounted on their own baseboard with a high-speed switch on the baseboard itself. The initial communication that launches the application on the accelerators travels over a PCIe path, and when the work completes, the results are sent back to the CPU over PCIe as well. CPU-to-accelerator communication should otherwise be limited, allowing the accelerators to communicate with each other over high-speed paths. A request from one accelerator is routed either directly or through one of the four non-blocking switches to the appropriate GPU. GPU-to-GPU performance is significantly higher than the PCIe path, which lets an application use more than one GPU without needing to interact with the CPU over the relatively slow PCIe lanes. ... A common and well-defined interface between CPUs and accelerators is to communicate over PCIe lanes. This architecture allows for various server configurations and numbers of accelerators.
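
As a rough illustration of why the direct accelerator-to-accelerator path matters, the sketch below (assuming a machine with at least two CUDA GPUs and PyTorch installed) checks whether a peer-to-peer path exists and compares a direct GPU-to-GPU copy against one staged through the CPU. The tensor size and timing approach are illustrative only, not a rigorous benchmark:

```python
# Rough, illustrative sketch: compare a direct GPU-to-GPU copy with one staged
# through host memory, i.e. back over the PCIe path via the CPU.
import time
import torch

assert torch.cuda.device_count() >= 2, "needs at least two GPUs"

# Whether the driver exposes a direct peer-to-peer path between GPU 0 and GPU 1.
print("P2P access GPU0 -> GPU1:", torch.cuda.can_device_access_peer(0, 1))

x = torch.randn(64 * 1024 * 1024, device="cuda:0")  # ~256 MB of float32

def timed_ms(copy_fn):
    torch.cuda.synchronize(0)
    torch.cuda.synchronize(1)
    start = time.perf_counter()
    copy_fn()
    torch.cuda.synchronize(0)
    torch.cuda.synchronize(1)
    return (time.perf_counter() - start) * 1000

direct_ms = timed_ms(lambda: x.to("cuda:1"))        # device-to-device copy
staged_ms = timed_ms(lambda: x.cpu().to("cuda:1"))  # round trip through host RAM

print(f"direct GPU->GPU: {direct_ms:.1f} ms, staged via CPU: {staged_ms:.1f} ms")
```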


AI Testing: More Coverage, Fewer Bugs, New Risks

The productivity gains from AI in testing are substantial. One large international bank we have worked with leveraged our solution to increase test automation coverage across two of its websites (supporting around ten different languages) from a mere forty percent to almost ninety percent in a matter of weeks. I believe this is an amazing achievement, not only because of the end results but also because working in an enterprise environment with its security and integrations can typically take forever. While traditional test automation might be limited to a single platform or language and the capacity of one person, AI-enhanced testing breaks these limitations. Testers can now create and execute tests on any platform (web, mobile, desktop), in multiple languages, and with the capacity of numerous testers. This amplifies testing capabilities and introduces a new level of flexibility and efficiency. ... Upskilling QA teams with AI brings the significant advantage of multilingual testing and 24/7 operation. In today’s global market, software products must often cater to diverse users, requiring testing in multiple languages. AI makes this possible without requiring testers to know each language, expanding the reach and usability of software products.


Why Great Leaders Embrace Broad Thinking — and How It Transforms Organizations

Broad thinking starts with employing three behaviors. First, spend time following your thoughts in an exploratory way rather than simply trying to find an answer or idea and moving on. Second, look at things from different angles and consider a wide range of options carefully before acting. Third, consistently consider the bigger picture and resist getting caught up in the smaller details. ... Companies want action. They don't want employees sitting around wringing their hands, frozen with indecision. They also don't want employees overanalyzing decisions to the point of inertia. Therefore, they often train employees to make decisions faster and more efficiently. However, decisions made for speed don't always make for great decisions. Especially seemingly simple ones that have larger downstream ramifications. ... Broad thinking considers the parts as being inseparable from the whole. The elephant parts are inseparable from the entire animal, just like the promotional campaign was inseparable from the other aspects of the organization it impacted. When you broaden your perspective, you also become more sensitive to subtleties of differentiation: how elements that are seemingly irrelevant, extraneous, or opposites can interconnect.


How Edge Computing Is Enhancing AI Solutions

Edge computing enhances the privacy and security of AI solutions by keeping sensitive data local rather than transmitting it to centralized cloud servers. This approach is most advantageous in industries such as healthcare, where privacy is paramount, especially with regard to patient information. By processing medical images or patient records at the edge, healthcare providers can ensure compliance with data protection regulations while still leveraging AI for improved diagnostics and treatment planning. Furthermore, edge AI reduces the number of exposed data points that can be attacked over the network by breaking data tasks into localized subsets. ... As the volume of data generated by IoT devices continues to grow exponentially, transmitting all this information to the cloud for processing becomes increasingly impractical and expensive. Edge computing addresses this by sorting and analyzing data locally, dramatically reducing the bandwidth required and the associated costs while also enhancing system performance.


Why being in HR is getting tougher—and how to break through

The HR function lives in the friction between caring for the employee and caring for the organization. HR’s role is to represent the best interests of the organizations we work for and deliver care to employees for their end-to-end life cycle at those organizations. When you live in that friction, at times, you’re underdelivering that care to employees. At this moment—when employees’ needs are at an all-time high and organizations are struggling with costs and resetting around historical growth expectations—that gap is even wider than during less volatile times. There’s also an assumption that the employees’ interests and the company’s interests aren’t aligned—when many times they are. I have several tools to help people when they’re struggling. We can get a little bit caught up in the myths and expectations of people wanting too much, and that’s where the HR professional has to pull back and say, “This is what I can do, and it’s actually quite good.” ... Trust is hard earned but can go away in a second. And it can go away in a second because of HR but also, unfortunately, because of business leaders. 



Quote for the day:

"You can't be a leader if you can't influence others to act." -- Dale E. Zand

Daily Tech Digest - October 02, 2024

Breaking through AI data bottlenecks

One of the most significant bottlenecks in training specialized AI models is the scarcity of high-quality, domain-specific data. Building enterprise-grade AI requires increasing amounts of diverse, highly contextualized data, of which there are limited supplies. This scarcity, sometimes known as the “cold start” problem, is only growing as companies license their data and further segment the internet. For startups and leading AI teams building state-of-the-art generative AI products for specialized use cases, public data sets also offer capped value, due to their lack of specificity and timeliness. ... Synthesizing data not only increases the volume of training data but also enhances its diversity and relevance to specific problems. For instance, financial services companies are already using synthetic data to rapidly augment and diversify real-world training sets for more robust fraud detection — an effort that is supported by financial regulators like the UK’s Financial Conduct Authority. By using synthetic data, these companies can generate simulations of never-before-seen scenarios and gain safe access to proprietary data via digital sandboxes.


Five Common Misconceptions About Event-Driven Architecture

Event sourcing is an approach to persisting data within a service. Instead of writing the current state to the database, and updating that stored data when the state changes, you store an event for every state change. The state can then be restored by replaying the events. Event-driven architecture is about communication between services. A service publishes any changes in its subdomain it deems potentially interesting for others, and other services subscribe to these updates. These events are carriers of state and triggers of actions on the subscriber side. While these two patterns complement each other well, you can have either without the other. ... Just as you can use Kafka without being event-driven, you can build an event-driven architecture without Kafka. And I’m not only talking about “Kafka replacements”, i.e. other log-based message brokers. I don’t know why you’d want to, but you could use a store-and-forward message queue (like ActiveMQ or RabbitMQ) for your eventing. You could even do it without any messaging infrastructure at all, e.g. by implementing HTTP feeds. Just because you could, doesn’t mean you should! A log-based message broker is most likely the best approach for you, too, if you want an event-driven architecture.
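
A minimal sketch of event sourcing inside a single service makes the distinction concrete: state changes are appended as events and the current state is rebuilt by replay, with no message broker involved at all. The account example and event names are invented for illustration:

```python
# Illustrative sketch of event sourcing inside a single service: state is not
# stored directly, it is rebuilt by replaying the stored events. No message
# broker (Kafka or otherwise) is involved; publishing events to other services
# would be a separate, optional concern.

events = []  # append-only event store for one account (in-memory stand-in)

def apply(balance, event):
    kind, amount = event
    if kind == "deposited":
        return balance + amount
    if kind == "withdrawn":
        return balance - amount
    raise ValueError(f"unknown event: {kind}")

def record(event):
    events.append(event)  # every state change is persisted as an event

def current_balance():
    # Replaying the full history restores the current state.
    balance = 0
    for event in events:
        balance = apply(balance, event)
    return balance

record(("deposited", 100))
record(("withdrawn", 30))
record(("deposited", 5))
print(current_balance())  # 75
```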


Mostly AI’s synthetic text tool can unlock enterprise emails and conversations for AI training

Mostly AI provides enterprises with a platform to train their own AI generators that can produce synthetic data on the fly. The company started off by enabling the generation of structured tabular datasets, capturing nuances of transaction records, patient journeys and customer relationship management (CRM) databases. Now, as the next step, it is expanding to text data. While proprietary text datasets – like emails, chatbot conversations and support transcriptions – are collected on a large scale, they are difficult to use because of the inclusion of PII (like customer information), diversity gaps and structured data to some level. With the new synthetic text functionality on the Mostly AI platform, users can train an AI generator using any proprietary text they have and then deploy it to produce a cleansed synthetic version of the original data, free from PII or diversity gaps. ... The new feature, and its ability to unlock value from proprietary text without privacy concerns, makes it a lucrative offering for enterprises looking to strengthen their AI training efforts. The company claims training a text classifier on its platform’s synthetic text resulted in 35% performance enhancement as compared to data generated by prompting GPT-4o-mini.


Not Maintaining Data Quality Today Would Mean Garbage In, Disasters Out

Enterprises are increasingly data-driven and rely heavily on the collected data to make decisions, says Choudhary. Also, a decade ago, a single application stored all its data in a relational database for weekly reporting. Today, data is scattered across various sources including relational databases, third-party data stores, cloud environments, on-premise systems, and hybrid models, says Choudhary. This shift has made data management much more complex, as all of these sources need to be harmonized in one place. However, in the world of AI, both structured and unstructured data need to be of high quality. Choudhary states that not maintaining data quality in the AI age would lead to garbage in, disasters out. Highlighting the relationship between AI and data observability in enterprise settings, he says that given the role of both structured and unstructured data in enterprises, data observability will become more critical. ... However, AI also requires the unstructured business context, such as documents from wikis, emails, design documents, and business requirement documents (BRDs). He stresses that this unstructured data adds context to the factual information on which business models are built.


Three Evolving Cybersecurity Attack Strategies in OT Environments

Attackers are increasingly targeting supply chains, capitalizing on the trust between vendors and users to breach OT systems. This method offers a high return on investment, as compromising a single supplier can result in widespread breaches. The Dragonfly attacks, where attackers penetrated hundreds of OT systems by replacing legitimate software with Trojanized versions, exemplify this threat. ... Attack strategies are shifting from immediate exploitation to establishing persistent footholds within OT environments. Attackers now prefer to lie dormant, waiting for an opportune moment to strike, such as during economic instability or geopolitical events. This approach allows them to exploit unknown or unpatched vulnerabilities, as demonstrated by the Log4j and Pipedream attacks. ... Attackers are increasingly focused on collecting and storing encrypted data from OT environments for future exploitation, particularly with the impending advent of post-quantum computing. This poses a significant risk to current encryption methods, potentially allowing attackers to decrypt previously secure data. Manufacturers must implement additional protective layers and consider future-proofing their encryption strategies to safeguard data against these emerging threats.


Mitigating Cybersecurity Risk in Open-Source Software

Unsurprisingly, open-source software's lineage is complex. Whereas commercial software is typically designed, built and supported by one corporate entity, open-source code could be written by a developer, a well-resourced open-sourced community or a teenage whiz kid. Libraries containing all of this open-source code, procedures and scripts are extensive. They can contain libraries within libraries, each with its own family tree. A single open-source project may have thousands of lines of code from hundreds of authors which can make line-by-line code analysis impractical and may result in vulnerabilities slipping through the cracks. These challenges are further exacerbated by the fact that many libraries are stored on public repositories such as GitHub, which may be compromised by bad actors injecting malicious code into a component. Vulnerabilities can also be accidentally introduced by developers. Synopsys' OSSRA report found that 74% of the audited code bases had high-risk vulnerabilities. And don't forget patching, updates and security notifications that are standard practices from commercial suppliers but likely lacking (or far slower) in the world of open-source software. 


Will AI Middle Managers Be the Next Big Disruption?

Trust remains a critical barrier, with many companies double-checking AI outputs, especially in sensitive areas such as compliance. But as the use of explainable AI grows, offering transparent decision-making, companies may begin to relax their guard and fully integrate AI as a trusted part of the workforce. But despite its vast potential and transformative abilities, autonomous AI is unlikely to work without human supervision. AI lacks the emotional intelligence needed to navigate complex human relationships, and companies are often skeptical of assigning decision-making to AI tools. ... "One thing that won't change is that work is still centered around humans, so that people can bring their creativity, which is such an important human trait," said Fiona Cicconi, chief people officer, Google. Accenture's report highlights just that. Technology alone will not drive AI-driven growth. ... Having said that, managers will have to roll up their sleeves, upskill and adapt to AI and emerging technologies that benefit their teams and align with organizational objectives. To fully realize the potential of AI, businesses will need to prioritize human-AI collaboration.


Managing Risk: Is Your Data Center Insurance up to the Test?

E&O policies generally protect against liability to third parties for losses arising from the insured’s errors and omissions in performing “professional services.” ... Cyber coverage typically protects against a broad range of first-party losses and liability claims arising from various causes, including data breaches and other disclosures of non-public information. A data center that processes data owned by third parties plainly has liability exposure to such parties if their non-public information is disclosed as a result of the data center’s operations. But even if a data center is processing only its own company’s data, it still has liability exposure, including for disclosure of non-public information belonging to its customers and employees. Given the often-substantial costs of defending data breach claims, data center operators would be well-advised to (1) review their cyber policies carefully for exclusions or limitations that potentially could apply to their liability coverage under circumstances particular to their operations and (2) purchase cyber liability limits commensurate with the amount and sensitivity of non-public data in their possession.


Attribution as the foundation of developer trust

With the need for more trust in AI-generated content, it is critical to credit the author/subject matter expert and the larger community who created and curated the content shared by an LLM. This also ensures LLMs use the most relevant and up-to-date information and content, ultimately presenting the Rosetta Stone needed by a model to build trust in sources and resulting decisions. All of our OverflowAPI partners have enabled attribution through retrieval augmented generation (RAG). For those who may not be familiar with it, retrieval augmented generation is an AI framework that combines generative large language models (LLMs) with traditional information retrieval systems to update answers with the latest knowledge in real time (without requiring re-training models). This is because generative AI technologies are powerful but limited by what they “know” or “the data they have been trained on.” RAG helps solve this by pairing information retrieval with carefully designed system prompts that enable LLMs to provide relevant, contextual, and up-to-date information from an external source. In instances involving domain-specific knowledge, RAG can drastically improve the accuracy of an LLM's responses.
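
For readers unfamiliar with the pattern, here is a minimal sketch of RAG with attribution: retrieve the most relevant documents, ground the prompt in them, and return the answer together with the sources used. The toy retriever, document store, and generate() placeholder are assumptions for illustration only; they are not the OverflowAPI or any specific vendor SDK:

```python
# Minimal RAG-with-attribution sketch. The scoring, the document store, and the
# generate() call are placeholders for a real retriever and LLM client.

DOCS = [
    {"id": 1, "url": "https://example.com/q/1", "text": "To undo the last commit keep changes: git reset --soft HEAD~1"},
    {"id": 2, "url": "https://example.com/q/2", "text": "List branches merged into main: git branch --merged main"},
]

def retrieve(question, k=1):
    # Toy relevance score: word overlap. A real system would use embeddings or BM25.
    def score(doc):
        return len(set(question.lower().split()) & set(doc["text"].lower().split()))
    return sorted(DOCS, key=score, reverse=True)[:k]

def generate(prompt):
    # Placeholder for a call to an LLM; here it just echoes the grounded context.
    return prompt.split("Context:\n", 1)[1]

def answer_with_attribution(question):
    sources = retrieve(question)
    context = "\n".join(d["text"] for d in sources)
    prompt = f"Answer using only the context and cite it.\nQuestion: {question}\nContext:\n{context}"
    answer = generate(prompt)
    citations = [d["url"] for d in sources]  # attribution travels with the answer
    return answer, citations

print(answer_with_attribution("how do I undo the last commit"))
```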


Measurement Challenges in AI Catastrophic Risk Governance and Safety Frameworks

The current definition of catastrophic events, focusing on "large-scale devastation... directly caused by an AI model," overlooks a critical aspect: indirect causation and salient contributing causes. Indirect causation refers to cases where AI plays a pivotal but not immediately apparent role. For instance, the development and deployment of advanced AI models could trigger an international AI arms race, becoming a salient contributor to increased geopolitical instability or conflict. A concrete example might be AI-enhanced cyber warfare capabilities leading to critical infrastructure failures across multiple countries. AI systems might also amplify existing systemic risks or introduce new vulnerabilities that become salient contributing causes to a catastrophic event. The current narrow scope of AI catastrophic events may lead to underestimating the full range of potential catastrophic outcomes associated with advanced AI models, particularly those arising from complex interactions between AI and other sociotechnical systems. This could include scenarios where AI exacerbates climate change through increased energy consumption or where AI-powered misinformation campaigns gradually lead to the breakdown of trust in democratic institutions and social order.



Quote for the day:

"Facing difficult circumstances does not determine who you are. They simply bring to light who you already were." -- Chris Rollins

Daily Tech Digest - September 25, 2024

When technical debt strikes the security stack

“Security professionals are not immune from acquiring their own technical debt. It comes through a lack of attention to periodic reviews and maintenance of security controls,” says Howard Taylor, CISO of Radware. “The basic rule is that security rapidly decreases if it is not continuously improved. The time will come when a security incident or audit will require an emergency collection of the debt.” ... The paradox of security technical debt is that many departments concurrently suffer from both solution debt that causes gaps in coverage or capabilities, as well as rampant tool sprawl that eats up budget and makes it difficult to effectively use tools. ... “Detection engineering is often a large source of technical debt: over the years, a great detection engineering team can produce many great detections, but the reliability of those detections can start to fade as the rest of the infrastructure changes,” he says. “Great detections become less reliable over time, the authors leave the company, and the detection starts to be ignored. This leads to waste of energy and very often cost.” ... Role sprawl is another common scenario that contributes significantly to security debt, says Piyush Pandey, CEO at Pathlock.


Google Announces New Gmail Security Move For Millions

From the Gmail perspective, the security advisor will include a security sandbox where all email attachments will be scanned for malicious software employing a virtual environment to safely analyze said files. Google said the tool can “delay message delivery, allow customization of scan rules, and automatically move suspicious messages to the spam folder.” Gmail also gets enhanced safe browsing which gives additional protection by scanning incoming messages for malicious content before it is actually delivered. ... A Google spokesperson told me that the AI Gemini app is to get enterprise-grade security protections in core services now. With availability from October 15, for customers running on a Workspace Business, Enterprise, or Frontline plan, Google said that “with all of the core Workspace security and privacy controls in place, companies have the tools to deploy AI securely, privately and responsibly in their organizations in the specific way that they want it.” The critical components of this security move include ensuring Gemini is subject to the same privacy, security, and compliance policies as the rest of the Workspace core services, such as Gmail and Docs.


The Next Big Interconnect Technology Could Be Plastic

e-Tube technology is a new, scalable interconnect platform that uses radio wave transmission over a dielectric waveguide made of – drumroll – common plastic material such as low-density polyethylene (LDPE). While waveguide theory has been studied for many years, only a few organizations have applied the technology for mainstream data interconnect applications. Because copper and optical interconnects are historically entrenched technologies, most research has focused on extending copper life or improving energy and cost efficiency of optical solutions. But now there is a shift toward exploring the e-Tube option that delivers a combination of benefits that copper and optical cannot, including energy-efficiency, low latency, cost-efficiency and scalability to multi-terabit network speeds required in next-gen data centers. The key metrics for data center cabling are peak throughput, energy efficiency, low latency, long cable reach and cost that enables mass deployment. Across these metrics, e-Tube technology provides advantages compared to copper and optical technologies. Traditionally, copper-based interconnects have been considered an inexpensive and reliable choice for short-reach data center applications, such as top-of-rack switch connections. 


From Theory to Action: Building a Strong Cyber Risk Governance Framework

Setting your risk appetite is about more than just throwing a number out there. It’s about understanding the types of risks you face and translating them into specific, measurable risk tolerance statements. For example, “We’re willing to absorb up to $1 million in cyber losses annually but no more.” Once you have that in place, you’ll find decision-making becomes much more straightforward. ... If your current cybersecurity budget isn't sufficient to handle your stated risk appetite, you may need to adjust it. One of the best ways to determine if your budget aligns with your risk appetite is by using loss exceedance curves (LECs). These handy charts allow you to visualize the forecasted likelihood and impact of potential cyber events. They help you decide where to invest more in cybersecurity and perhaps where even to cut back. ... One thing that a lot of organizations miss in their cyber risk governance framework is the effective use of cyber insurance. Here's the trick: cyber insurance shouldn’t be used to cover routine losses. Doing so will only lead to increased premiums. Instead, it should be your safety net for the larger, more catastrophic incidents – the kinds that keep executives awake at night.
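
As a sketch of how a loss exceedance curve can be produced, the Monte Carlo example below simulates annual cyber losses and reports the probability of exceeding several thresholds, which can then be compared against a stated risk appetite such as $1 million. The frequency and severity parameters are purely illustrative assumptions, not industry benchmarks:

```python
# Monte Carlo sketch of a loss exceedance curve (LEC). All parameters below are
# illustrative assumptions, not benchmarks for any real organization.
import numpy as np

rng = np.random.default_rng(7)
YEARS = 100_000          # number of simulated years
EVENTS_PER_YEAR = 3      # assumed mean incident frequency
MEDIAN_LOSS = 50_000     # assumed median loss per incident (USD)
SIGMA = 1.2              # assumed lognormal spread of per-incident losses

# Total annual loss = sum of lognormal severities over a Poisson number of incidents.
counts = rng.poisson(EVENTS_PER_YEAR, size=YEARS)
annual_losses = np.array([
    rng.lognormal(np.log(MEDIAN_LOSS), SIGMA, size=n).sum() for n in counts
])

# Loss exceedance curve: P(total annual loss > threshold) for a range of thresholds.
for threshold in [100_000, 250_000, 500_000, 1_000_000, 5_000_000]:
    prob = (annual_losses > threshold).mean()
    print(f"P(annual loss > ${threshold:>9,}) ~ {prob:.1%}")

# The $1M row can be checked against an appetite statement such as "no more than
# a 10% chance of exceeding $1M in cyber losses per year."
```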


Is Prompt Engineering Dead? How To Scale Enterprise GenAI Adoption

If you pick a model that is a poor fit for your use case, it will not be good at determining the context of the question and will fail at retrieving a reference point for the response. In those situations, the lack of reference data needed for providing an accurate response contributes to a hallucination. While there are many situations where you would prefer the model to give no response at all rather than fabricate one, what happens if there is no exact answer available is that the model will take some data points that it thinks are contextually relevant to the query and return an inaccurate answer. ... To leverage LLMs effectively at an enterprise scale, businesses need to understand their limitations. Prompt engineering and RAG can improve accuracy, but LLMs must be tightly limited in domain knowledge and scope. Each LLM should be trained for a specific use case, using a specific dataset with data owners providing feedback. This ensures no chance of confusing the model with information from different domains. The training process for LLMs differs from traditional machine learning, requiring human oversight and quality assurance by data owners.


AI disruption in Fintech: The dawn of smarter financial solutions

Financial institutions face diverse fraud challenges, from identity theft to fund transfer scams. Manual analysis of countless daily transactions is impractical. AI-based systems are empowering Fintechs to analyze data, detect anomalies, and flag suspicious activities. AI is monitoring transactions, filtering spam, and identifying malware. It can recognise social engineering patterns and alert users to potential threats. While fraudsters also use AI for sophisticated scams, financial institutions can leverage AI to identify synthetic content and distinguish between trustworthy and untrustworthy information. ... AI is transforming fintech customer service, enhancing retention and loyalty. It provides personalised, consistent experiences across channels, anticipating needs and offering value-driven recommendations. AI-powered chatbots handle common queries efficiently, allowing human agents to focus on complex issues. This technology enables 24/7 support across various platforms, meeting customer expectations for instant access. AI analytics predict customer needs based on financial history, transaction patterns, and life events, enabling targeted, timely offers. 


CIOs Go Bold

In business, someone who is bold is an individual who exudes confidence and assertiveness and is business savvy. However, there is a fine line between being assertive and confident in a way that is admired and being perceived as overbearing and hard to work with. ... If your personal CIO goals include being bolder, the first step is for you to self-assess. Then, look around. You probably already know individuals in the organization or colleagues in the C-suite who are perceived as being bold shakers and movers. What did they do to acquire this reputation? ... To get results from the ideas you propose, the outcomes of your ideas must solve strategic goals and/or pain points in the business. Consequently, the first rule of thumb for CIOs is to think beyond the IT box. Instead, ask questions like how an IT solution can help solve a particular challenge for the business. Digitalization is a prime example. Early digitalization projects started out with missions such as eliminating paper by digitalizing information and making it more searchable and accessible. Unfortunately, being able to search and access data was hard to quantify in terms of business results. 


What does the Cyber Security and Resilience Bill mean for businesses?

The Bill aims to strengthen the UK’s cyber defences by ensuring that critical infrastructure and digital services are secure by protecting those services and supply chains. It’s expected to share common ground with NIS2 but there are also some elements that are notably absent. These differences could mean the Bill is not quite as burdensome as its European counterpart but equally, it runs the risk of making it not as effective. ... The problem now is that many businesses will be looking at both sets of regulations and scratching their heads in confusion. Should they assume that the Bill will follow the trajectory of NIS2 and make preparations accordingly or should they assume it will continue to take a lighter touch and one that may not even apply to them? There’s no doubt that NIS2 will introduce a significant compliance burden with one report suggesting it will cost upwards of 31.2bn euros per year. Then there’s the issue of those that will need to comply with both sets of regulations i.e. those entities that either supply to customers or have offices on the continent. They will be looking for the types of commonalities we’ve explored here in order to harmonise their compliance efforts and achieve economies of scale. 


3 Key Practices for Perfecting Cloud Native Architecture

As microservices proliferate, managing their communication becomes increasingly complex. Service meshes like Istio or Linkerd offer a solution by handling service discovery, load balancing, and secure communication between services. This allows developers to focus on building features rather than getting bogged down by the intricacies of inter-service communication. ... Failures are inevitable in cloud native environments. Designing microservices with fault isolation in mind helps prevent a single service failure from cascading throughout the entire system. By implementing circuit breakers and retry mechanisms, organizations can enhance the resilience of their architecture, ensuring that their systems remain robust even in the face of unexpected challenges. ... Traditional CI/CD pipelines often become bottlenecks during the build and testing phases. To overcome this, modern CI/CD tools that support parallel execution should be leveraged. ... Not every code change necessitates a complete rebuild of the entire application. Organizations can significantly speed up the pipeline while conserving resources by implementing incremental builds and tests, which only recompile and retest the modified portions of the codebase.
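
To illustrate the fault-isolation point, here is a minimal circuit breaker sketch: after a run of consecutive failures the breaker opens and calls fail fast until a cool-down elapses, so one struggling service does not drag down its callers. The thresholds, timings, and example service URL are assumptions for illustration:

```python
# Minimal circuit breaker sketch: after failure_threshold consecutive failures
# the breaker "opens" and fails fast, preventing a struggling downstream service
# from being hammered; after reset_seconds it lets one trial call through again.
import time

class CircuitOpenError(Exception):
    pass

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_seconds=30.0):
        self.failure_threshold = failure_threshold
        self.reset_seconds = reset_seconds
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_seconds:
                raise CircuitOpenError("downstream unavailable, failing fast")
            self.opened_at = None  # half-open: allow a single trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the breaker again
        return result

breaker = CircuitBreaker()
# Usage: wrap any inter-service call, e.g. (hypothetical URL)
# breaker.call(requests.get, "http://inventory-service/items/42", timeout=2)
```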


Copilots and low-code apps are creating a new 'vast attack surface' - 4 ways to fix that

"In traditional application development, apps are carefully built throughout the software development lifecycle, where each app is continuously planned, designed, implemented, measured, and analyzed," they explain. "In modern business application development, however, no such checks and balances exists and a new form of shadow IT emerges." Within the range of copilot solutions, "anyone can build and access powerful business apps and copilots that access, transfer, and store sensitive data and contribute to critical business operations with just a couple clicks of the mouse or use of natural language text prompts," the study cautions. "The velocity and magnitude of this new wave of application development creates a new and vast attack surface." Many enterprises encouraging copilot and low-code development are "not fully embracing that they need to contextualize and understand not only how many apps and copilots are being built, but also the business context such as what data the app interacts with, who it is intended for, and what business function it is meant to accomplish."



Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki

Daily Tech Digest - August 22, 2024

A Brief History of Data Ethics

The roots of the practice of data ethics can be traced back to the mid-20th century when concerns about privacy and confidentiality began to emerge alongside the growing use of computers for data processing. The development of automated data collection systems raised questions about who had access to personal information and how it could be misused. Early ethical discussions primarily revolved around protecting individual privacy rights and ensuring the responsible handling of sensitive data. One pivotal moment came with the enactment of the Fair Information Practice Principles (FIPPs) in the United States in the 1970s. These principles, which emphasized transparency, accountability, and user control over personal data, laid the groundwork for modern data protection laws and influenced ethical debates globally. ... Ethical guidelines such as those proposed by the European Union’s General Data Protection Regulation (GDPR) emphasize the importance of informed consent, limiting the collection of data to its intended use, and data minimization. All these concepts are part of an ethical approach to data and its usage. 


Collaborative AI in Building Architecture

As a design practice fascinated by the practical deployment of AI, we can’t help but be reminded of the early days of the personal computer, as this also had a high impact on the design of workplace. Back in the 1980s, most computers were giant, expensive mainframes that only large companies and universities could afford. But then, a few visionary companies started putting computers on desktops, from workplaces, to schools and finally homes. Suddenly, computing power was accessible to everyone but it needed different spaces. ... As with any powerful new tool, AI also brings with it profound challenges and responsibilities. One significant concern is the potential for AI to perpetuate or even amplify biases present in the data it is trained on, leading to unfair or discriminatory outcomes. AI bias is already prevalent and it is crucial we learn how to teach AI to discern bias. Not so easy. AI could also be used maliciously, e.g. to create deepfakes or spread misinformation. There are also legitimate concerns about the impact of AI on jobs and the workforce, but equally how it improves and inspires that workforce.


The Deeper Issues Surrounding Data Privacy

Corporate legal departments will continue to draft voluminous agreement contracts packed with fine print provisions and disclaimers. CIOs can’t avoid this, but they can make a case to clearly present to users of websites and services how and under what conditions data is collected and shared. Many companies are doing this—and are also providing "Opt Out" mechanisms for users who are uncomfortable with the corporate data privacy policy. That said, taking these steps can be easier said than done. There are the third-party agreements that upper management makes that include provisions for data sharing, and there is also the issue of data custody. For instance, if you choose to store some of your customer data on a cloud service and you no longer have direct custody of your data, and the cloud provider experiences a breach that compromises your data, whose fault is it? Once again, there are no ironclad legal or federal mandates that address this issue, but insurance companies do tackle it. “In a cloud environment, the data owner faces liability for losses resulting from a data breach, even if the security failures are the fault of the data holder (cloud provider),” says Transparity Insurance Services.


A survival guide for data privacy in the age of federal inaction

First, organizations should map or inventory their data to understand what they have. By mapping and inventorying data, organizations can better visualize, contextualize and prioritize risks. And, by knowing what data you have, not only can you manage current privacy compliance risks, but you can also be better prepared to respond to new requirements. As an example, those data maps can allow you to see the data flows you have in place where you are sharing data – a key to accurately reviewing your third-party risks. Beyond preparing for existing and new privacy laws, this also allows organizations to identify their data flows and minimize the risk of exposure or compromise by better understanding where data is being distributed. Secondly, companies should think through how to operationalize priority areas to embed them in the business. This might be through training of privacy champions and adopting technology to automate privacy compliance obligations such as implementing an assessments program that allows you to better understand data-related impact.


The Struggle To Test Microservices Before Merging

End-to-end testing is where the rubber meets the road: we get the most reliable tests when we send in requests that actually hit all dependencies and services to form a correct response. Integration testing at the API or frontend level using real microservice dependencies offers substantial value. These tests assess real behaviors and interactions, providing a realistic view of the system’s functionality. Typically, such tests are run post-merge in a staging or pre-production environment, often referred to as end-to-end (E2E) testing. ... What we really want is a realistic environment that can be used by any developer, even at an early stage of working on a PR. Achieving the benefits of API- and frontend-level testing pre-merge would save the effort of writing and maintaining mocks while still testing real system behaviors. This can be done using canary-style testing in a shared baseline environment, akin to canary rollouts but in a pre-production context. To clarify that concept: we want to run a new version of the code in a shared staging environment without that experimental code breaking staging for all the other development teams, the same way a canary deploy can go out, break in production, and not take down the service for everyone.
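
One common way to get that isolation is request-level routing: only requests carrying a sandbox header reach the experimental build, while everything else keeps hitting the shared baseline. The sketch below is a simplified illustration of that idea; the header name, service names, and URLs are hypothetical, and real setups usually implement the routing in a service mesh or API gateway rather than in application code.

```python
# Baseline service addresses in the shared staging environment (hypothetical).
BASELINE = {
    "orders":  "http://orders.staging.svc:8080",
    "billing": "http://billing.staging.svc:8080",
}

# Sandboxed builds registered per pull request (hypothetical PR id -> override).
PR_OVERRIDES = {
    "pr-1234": {"orders": "http://orders-pr-1234.staging.svc:8080"},
}

def resolve(service: str, headers: dict[str, str]) -> str:
    """Route to the PR's experimental build only when the request carries its
    routing header; every other request keeps hitting the shared baseline."""
    sandbox = headers.get("x-sandbox-id")
    if sandbox and service in PR_OVERRIDES.get(sandbox, {}):
        return PR_OVERRIDES[sandbox][service]
    return BASELINE[service]

# A developer testing PR 1234 exercises their new 'orders' build against the real
# 'billing' dependency, without breaking staging for anyone else.
assert resolve("orders", {"x-sandbox-id": "pr-1234"}).startswith("http://orders-pr-1234")
assert resolve("billing", {"x-sandbox-id": "pr-1234"}) == BASELINE["billing"]
assert resolve("orders", {}) == BASELINE["orders"]
```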


Neurotechnology is becoming widespread in workplaces – and our brain data needs to be protected

Neurotechnology has long been used in the field of medicine. Perhaps the most successful and well-known example is the cochlear implant, which can restore hearing. But neurotechnology is now becoming increasingly widespread. It is also becoming more sophisticated. Earlier this year, tech billionaire Elon Musk’s firm Neuralink implanted the first human patient with one of its computer brain chips, known as “Telepathy”. These chips are designed to enable people to translate thoughts into action. More recently, Musk revealed that a second human patient had one of his firm’s chips implanted in their brain. ... These concerns are heightened by a glaring gap in Australia’s current privacy laws, especially as they relate to employees. These laws govern how companies lawfully collect and use their employees’ personal information. However, they do not currently contain provisions that protect some of the most personal information of all: data from our brains. ... As the Australian government prepares to introduce sweeping reforms to privacy legislation this month, it should take heed of these international examples and address the serious privacy risks presented by neurotechnology used in workplaces.


I Said I Was Technically a CISO, Not a Technical CISO

Often a CISO will not come from a technical background, or their technical background is long in their career rearview mirror. Can a CISO be effective today without a technical background? And how do you keep up your technical chops once you get the role? ... We often talk about the need for a CISO to serve as a bridge to the rest of the business, but a CISO’s role still needs to be grounded in technical proficiency, argues Jeff Hancock, CISO at Access Point Technology, in a recent LinkedIn post. Now, many CISOs come from a technical background, but that background becomes hard to maintain once you’re in a CISO role. Jeff says that while no one can be a master of all technical disciplines, CISOs should make a goal of selecting a few to retain mastery of over a long-term plan. Now, Andy, does this reflect your experience? Is this a matter of credibility with the rest of the security team, or does a technical understanding allow a CISO to do their job better? When you were a CISO, how much of your technical skills stayed intact?


API security starts with API discovery

Because APIs tend to change quickly, it’s essential to update the API inventory continuously. A manual change-control process can be used, but this is prone to breakdowns between the development and security teams. The best way to establish a continuous discovery process is to adopt a runtime monitoring system that discovers APIs from real user traffic, or to require the use of an API gateway, or both. These options yield better oversight of the development team than relying on manual notifications to the security team as API changes are made. ... Threats can arise from outside or inside the organization, via the supply chain, or by attackers who either sign up as paying customers, or take over valid user accounts to stage an attack. Perimeter security products tend to focus on the API request alone, but inspecting API requests and responses together gives insight into additional risks related to security, quality, conformance, and business operations. There are so many factors involved when considering API risks that reducing this to a single number is helpful, even if the scoring algorithm is relatively simple.
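
As a rough illustration of combining discovery with scoring, the sketch below folds observed traffic into an inventory and reduces a couple of risk factors to a single number per endpoint. The log fields, weights, and endpoints are assumptions made for the example, not a standard scoring algorithm.

```python
from collections import defaultdict

# Hypothetical records derived from runtime traffic monitoring.
observed_requests = [
    {"method": "GET",  "path": "/v1/users/{id}", "auth": True,  "pii_in_response": True},
    {"method": "POST", "path": "/v1/login",      "auth": False, "pii_in_response": False},
    {"method": "GET",  "path": "/v1/export",     "auth": False, "pii_in_response": True},
]

# Fold traffic into an inventory keyed by method and path.
inventory = defaultdict(lambda: {"hits": 0, "auth": True, "pii": False})
for req in observed_requests:
    entry = inventory[(req["method"], req["path"])]
    entry["hits"] += 1
    entry["auth"] = entry["auth"] and req["auth"]        # any unauthenticated hit counts
    entry["pii"] = entry["pii"] or req["pii_in_response"]

def risk_score(entry: dict) -> int:
    """Collapse several factors into one number so endpoints can be ranked."""
    score = 0
    if not entry["auth"]:
        score += 5      # unauthenticated access observed
    if entry["pii"]:
        score += 3      # responses carry personal data
    return score

for (method, path), entry in sorted(inventory.items(), key=lambda kv: -risk_score(kv[1])):
    print(f"{risk_score(entry):>2}  {method} {path}")
```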


3 key strategies for mitigating non-human identity risks

The first step of any breach response activity is to understand whether you’re actually impacted; the ability to quickly identify any impacted credentials associated with the third party experiencing the incident is key. You need to be able to determine what the NHIs are connected to, who is utilizing them, and how to rotate them without disrupting critical business processes, or at least understand those implications prior to rotation. We know that in a security incident, speed is king. Being able to outpace attackers and cut down response time through documented processes, visibility, and automation can be the difference between mitigating the direct impact of a third-party breach and being swept up in the list of organizations impacted through their third-party relationships. ... When these factors deviate from the baseline activity associated with NHIs, they may indicate nefarious activity and warrant further investigation, or even remediation if an attack or compromise is confirmed. Security teams are not only regularly stretched thin; they also often lack a deep understanding of the organization’s entire application and third-party ecosystem, as well as insight into which assigned permissions and usage patterns are appropriate.
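
The speed described above usually comes down to whether the mapping from vendors to NHIs already exists before the incident. A toy version of that lookup is sketched below; the identity names, vendors, owners, and runbook IDs are hypothetical, and in practice this data would live in an identity inventory or secrets-management system rather than a list in code.

```python
# Minimal NHI inventory: which vendor each credential belongs to, what it reaches,
# who owns it, and where the rotation procedure lives (all names hypothetical).
nhi_inventory = [
    {"id": "svc-ci-deploy",   "vendor": "build-vendor", "used_by": "platform-team",
     "connects_to": ["artifact-store", "prod-cluster"], "rotation_runbook": "RB-101"},
    {"id": "svc-crm-sync",    "vendor": "crm-vendor",   "used_by": "sales-ops",
     "connects_to": ["crm-api", "data-warehouse"],      "rotation_runbook": "RB-207"},
    {"id": "svc-log-shipper", "vendor": "build-vendor", "used_by": "sre-team",
     "connects_to": ["log-pipeline"],                   "rotation_runbook": "RB-058"},
]

def impacted_credentials(breached_vendor: str) -> list[dict]:
    """Return every NHI tied to the breached third party, with enough context
    (systems reached, owning team, runbook) to rotate it without guesswork."""
    return [nhi for nhi in nhi_inventory if nhi["vendor"] == breached_vendor]

for nhi in impacted_credentials("build-vendor"):
    print(f"rotate {nhi['id']} (owner: {nhi['used_by']}, "
          f"reaches: {', '.join(nhi['connects_to'])}, runbook: {nhi['rotation_runbook']})")
```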


The Rising Cost of Digital Incidents: Understanding and Mitigating Outage Impact

Causal AI for DevOps promises a bridge between observability and automated digital incident response. By ‘Causal AI for DevOps’ I mean causal reasoning software that applies machine learning (ML) to automatically capture cause-and-effect relationships. Causal AI has the potential to help dev and ops teams better plan for changes to code, configurations, or load patterns, so they can stay focused on achieving service-level and business objectives instead of firefighting. With Causal AI for DevOps, many of the incident response tasks that are currently manual can be automated: when service entities are degraded or failing and affecting other entities that make up business services, causal reasoning software surfaces the relationship between the problem and the symptoms it is causing. The team responsible for the failing or degraded service is immediately notified so it can get to work resolving the problem. Some problems can be remediated automatically. Notifications can be sent to end users and other stakeholders, letting them know that their services are affected, along with an explanation of why this occurred and when things will be back to normal.
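
As a highly simplified illustration of that cause-to-symptom mapping, the sketch below walks a static dependency graph from a degraded entity to everything downstream of it and pages the owning team. The service names and owners are hypothetical, and real causal reasoning software learns these relationships from telemetry rather than from a hand-written map.

```python
# Hypothetical service dependency graph: consumer -> providers it depends on.
depends_on = {
    "checkout-ui":  ["payments-api"],
    "payments-api": ["postgres-primary"],
    "reporting":    ["postgres-primary"],
}
owners = {"postgres-primary": "dba-team", "payments-api": "payments-team"}

def affected_by(problem: str) -> set[str]:
    """Return every entity whose symptoms trace back to the failing entity."""
    impacted, frontier = set(), [problem]
    while frontier:
        current = frontier.pop()
        for consumer, providers in depends_on.items():
            if current in providers and consumer not in impacted:
                impacted.add(consumer)
                frontier.append(consumer)
    return impacted

def notify(problem: str) -> None:
    symptoms = affected_by(problem)
    print(f"Page {owners.get(problem, 'unknown-team')}: {problem} is degraded; "
          f"downstream symptoms in {sorted(symptoms)}")

notify("postgres-primary")
# Page dba-team: postgres-primary is degraded; downstream symptoms in
# ['checkout-ui', 'payments-api', 'reporting']
```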



Quote for the day:

"Holding on to the unchangeable past is a waste of energy, and serves no purpose in creating a better future." -- Unknown

Daily Tech Digest - August 05, 2024

Faceoff: Auditable AI Versus the AI Blackbox Problem

“The notion of auditable AI extends beyond the principles of responsible AI, which focuses on making AI systems robust, explainable, ethical, and efficient. While these principles are essential, auditable AI goes a step further by providing the necessary documentation and records to facilitate regulatory reviews and build confidence among stakeholders, including customers, partners, and the general public,” says Adnan Masood ... “There are two sides of auditing: the training data side, and the output side. The training data side includes where the data came from, the rights to use it, the outcomes, and whether the results can be traced back to show reasoning and correctness,” says Kevin Marcus. “The output side is trickier. Some algorithms, such as neural networks, are not explainable, and it is difficult to determine why a result is being produced. Other algorithms such as tree structures enable very clear traceability to show how a result is being produced,” Marcus adds. ... Developing explainable AI remains the holy grail and many an AI team is on a quest to find it. Until then, several efforts are underway to develop various ways to audit AI in order to have a stronger grip over its behavior and performance. 
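
As a rough illustration of what an audit record on the output side might capture, the sketch below logs each prediction with the model version, the training-data snapshot it came from, and a hash of the input, so a result can later be traced back. The field names and the in-memory store are assumptions made for the example, not a prescribed schema.

```python
import hashlib
import json
from datetime import datetime, timezone

# In practice this would be an append-only store; a list stands in for it here.
audit_log: list[dict] = []

def record_prediction(model_version: str, training_snapshot: str,
                      features: dict, prediction: str) -> None:
    """Append one auditable record linking an output back to its inputs and data."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "training_snapshot": training_snapshot,   # provenance for the training-data side
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "prediction": prediction,
    })

# Hypothetical model, snapshot path, and features, purely for illustration.
record_prediction("credit-model-2.3", "s3://datasets/credit/2024-07-01",
                  {"income": 52000, "tenure_months": 18}, "approve")
print(json.dumps(audit_log[-1], indent=2))
```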


A developer’s guide to the headless data architecture

We call it a “headless” data architecture because of its similarity to a “headless server,” where you have to use your own monitor and keyboard to log in. If you want to process or query your data in a headless data architecture, you will have to bring your own processing or querying “head” and plug it into the data — for example, Trino, Presto, Apache Flink, or Apache Spark. A headless data architecture can encompass multiple data formats, with data streams and tables as the two most common. Streams provide low-latency access to incremental data, while tables provide efficient bulk-query capabilities. Together, they give you the flexibility to choose the format that is most suitable for your use cases, whether it’s operational, analytical, or somewhere in between. ... Many businesses today are building their own headless data architectures, even if they’re not quite calling it that yet, though using cloud services tends to be the easiest and most popular way to get started. If you’re building your own headless data architecture, it’s important to first create well-organized and schematized data streams, before populating them into Apache Iceberg tables.
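
A minimal PySpark sketch of that pattern follows: one processing “head” reads a stream and lands it in an Iceberg table that any other head can query. It assumes a hypothetical Kafka topic named orders, an S3-backed Hadoop catalog, an already-created target table, and the Iceberg and Kafka connector jars on the classpath; all names and paths are illustrative only.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("headless-data-sketch")
    # Hypothetical Iceberg catalog backed by an S3 warehouse path.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3a://example-bucket/warehouse")
    .getOrCreate()
)

# Plug a processing "head" into the stream: read the (hypothetical) 'orders' topic.
orders = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .selectExpr("CAST(key AS STRING) AS order_id", "CAST(value AS STRING) AS payload")
)

# Land the stream in an Iceberg table (assumed to exist) for efficient bulk queries
# by any other "head" -- Trino, Flink, another Spark job, and so on.
query = (
    orders.writeStream
    .format("iceberg")
    .outputMode("append")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/orders")
    .toTable("lake.sales.orders")
)
```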


The Hidden Costs of the Cloud Skills Gap

Properly managing and scaling cloud resources requires expertise in load balancing, auto-scaling, and cost optimization. Without these skills, companies may face inefficiencies, either by over-provisioning or under-utilizing resources. Inexperienced or overstretched staff might struggle with performance optimization, resulting in slower applications and services, which can negatively impact user satisfaction and harm the company's reputation. ... Employees lacking the necessary skills to fully leverage cloud technologies may be less likely to propose innovative solutions or improvements, potentially leading to a lack of new product development and stagnation in business growth. The cloud presents abundant opportunities for innovation, including AI, machine learning, and advanced data analytics. Companies without the expertise to implement these technologies risk missing out on significant competitive advantages and exciting new discoveries. The bottom line is that skilled professionals often drive the adoption of new technologies because they have the knowledge to experiment in the field.


Architectural Retrospectives: The Key to Getting Better at Architecting

The traditional architectural review, especially if conducted by outside parties, often turns into a blame-assignment exercise. The whole point of regular architectural reviews in the MVA approach is to learn from experience so that catastrophic failures never occur. ... The mechanics of running an architectural retrospective session are identical to those of running a Sprint Retrospective in Scrum. In fact, an architectural focus can be added to a more general-purpose retrospective to avoid creating yet another meeting, so long as all the participants are involved in making architectural decisions. This can also be an opportunity to demonstrate that anyone can make an architectural decision, not only the "architects." ... Many teams skip retrospectives because they don’t like to confront their shortcomings. Architectural retrospectives are even more challenging because they examine not just the way the team works, but the way the team makes decisions. But architectural retros have great payoffs: they can uncover unspoken assumptions and hidden biases that prevent the team from making better decisions. If you retrospect on the way that you create your architecture, you will get better at architecting.


Design flaw has Microsoft Authenticator overwriting MFA accounts, locking users out

Microsoft confirmed the issue but said it was a feature, not a bug, and that it was the fault of users or companies that use the app for authentication. Microsoft issued two written statements to CSO Online but declined an interview. Its first statement read: “We can confirm that our authenticator app is functioning as intended. When users scan a QR code, they will receive a message prompt that asks for confirmation before proceeding with any action that might overwrite their account settings. This ensures that users are fully aware of the changes they are making.” One problem with that first statement is that it does not accurately reflect what the message says. The message says: “This action will overwrite existing security information for your account. To prevent being locked out of your account, continue only if you initiated this action from a trusted source.” The first sentence of the warning window is correct, in that the action will indeed overwrite the account. But the second sentence incorrectly tells the user to proceed as long as two conditions are met: that the user initiated the action, and that it came from a trusted source.


Automation Resilience: The Hidden Lesson of the CrowdStrike Debacle

Automated updates are nothing new, of course. Antivirus software has included such automation since the early days of the Web, and our computers are all safer for it. Today, such updates are commonplace – on computers, handheld devices, and in the cloud. Such automations, however, aren’t intelligent. They generally perform basic checks to ensure that they apply the update correctly. But they don’t check to see if the update performs properly after deployment, and they certainly have no way of rolling back a problematic update. If the CrowdStrike automated update process had checked to see if the update worked properly and rolled it back once it had discovered the problem, then we wouldn’t be where we are today. ... The good news: there is a technology that has been getting a lot of press recently that just might fit the bill: intelligent agents. Intelligent agents are AI-driven programs that work and learn autonomously, doing their good deeds independently of other software in their environment. As with other AI applications, intelligent agents learn as they go. Humans establish success and failure conditions for the agents and then feed back their results into their models so that they learn how to achieve successes and avoid failures.
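
As a simple illustration of the missing check, the sketch below applies an update, probes the service’s health, and restores the previous version if the probe fails. The paths, service name, and health probe are hypothetical stand-ins for whatever a real update pipeline would use; the point is only the verify-then-roll-back step.

```python
import shutil
import subprocess
import time
from pathlib import Path

# Hypothetical install locations for the agent being updated.
CURRENT = Path("/opt/agent/current")
BACKUP = Path("/opt/agent/previous")

def healthy(retries: int = 5) -> bool:
    """Probe the service after the update; any real post-deploy check could go here."""
    for _ in range(retries):
        if subprocess.run(["systemctl", "is-active", "--quiet", "agent"]).returncode == 0:
            return True
        time.sleep(2)
    return False

def apply_update(new_build: Path) -> bool:
    """Install the new build, verify the service still works, roll back if it doesn't."""
    shutil.copytree(CURRENT, BACKUP, dirs_exist_ok=True)    # keep a rollback point
    shutil.copytree(new_build, CURRENT, dirs_exist_ok=True)  # simplified: no stale-file cleanup
    subprocess.run(["systemctl", "restart", "agent"], check=False)
    if healthy():
        return True
    # The step missing from naive auto-updaters: detect failure and restore.
    shutil.copytree(BACKUP, CURRENT, dirs_exist_ok=True)
    subprocess.run(["systemctl", "restart", "agent"], check=False)
    return False
```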


Is HIPAA enough to protect patient privacy in the digital era?

HIPAA requires covered entities to establish strong data privacy policies, but it doesn’t regulate cybersecurity standards. HIPAA was deliberately designed to be tech agnostic, on the basis that this would keep it relevant despite frequent technology changes. But this could be a glaring omission. For example, Change Healthcare, a medical insurance claims clearinghouse, experienced a data breach when a hacker used stolen credentials to enter the network. If Change had implemented multi-factor authentication (MFA), a basic cybersecurity measure, the breach might not have taken place. But MFA isn’t specified in the HIPAA Security Rule, which was passed 20 years ago. Cybersecurity in the healthcare industry falls through the cracks of other regulations. The CISA update in early 2024 requires companies in critical infrastructure industries to report cyber incidents within 72 hours of discovery. ... “Crucially, there are many third-parties in the healthcare ecosystem that our members contract with who would not be considered ‘covered entities’ under this proposal, and therefore, would not be obligated to share or disclose that there had been a substantial cyber incident – or any cyber incident at all,” warns Russell Branzell, president and CEO of CHIME.


The downtime dilemma: Why organizations hesitate to switch IT infrastructure providers

Making a switch is not always an easy decision. So, how can a business be sure it’s doing the right thing? There are four boxes that a business should look for its IT infrastructure provider to tick before contemplating a move. First, is the provider there when needed? Reliable round-the-clock customer support is crucial for addressing any issues that arise before, during, and after a switch. For businesses with small IT departments or limited resources, this external support offers reliable infrastructure management without needing an extensive in-house team. Next, does the provider offer high uptime guarantees and Service Level Agreements (SLAs) outlining compensation for downtime? By prioritizing service providers with the Uptime Institute’s Tier IV classification, businesses are opting for a partner that is certified as fully fault-tolerant, highly resilient, and guaranteeing an uptime of 99.9 percent. This protects the business’s crucial IT systems, keeping them operational despite disruptive events such as a cyberattack, failing components, or unexpected outages.


Inside CIOs’ response to the CrowdStrike outage — and the lessons they learned

The first thing Alli did was gather the incident response team to assess the situation and establish the company’s immediate response plan. “We had to ensure that we could maintain business continuity while we addressed the implications of the outage,” Alli says. Communication was vital and Alli kept leadership and stakeholders informed about the situation and the steps IT was taking with regular updates. “It’s easy to panic in these situations, but we focused on being transparent and calm, which helped to keep the team grounded,” Alli says. Additionally, “The lack of access to critical security insights put us at risk temporarily, but more importantly, it highlighted vulnerabilities in our overall security posture. We had to quickly shift some of our security protocols and rely on other measures, which was a reminder of the importance of having a robust backup plan and redundancies in place,” Alli says. Mainiero agrees, saying that in this type of situation, “you have to take on a persona — if you’re panicked, your teams are going to panic.” He says that training has taught him never to raise his voice.


SASE: This Time It’s Personal

Working patterns are changing fast. Millennials and Gen Z – the first truly digital generations – no longer expect to go to the same place every day. Just as the web broke the link between bricks-and-mortar stores and shopping, we are now seeing the disintermediation of the workplace, which is anywhere and everywhere. The trend was accelerated by the pandemic, but it’s a mistake to believe that the pandemic created hybrid working. So, while SASE makes the right assumptions about the need to integrate networking and security, it doesn’t go far enough. The networking and security stack is still office-bound and centralized. If you were designing this from the ground up, you wouldn’t start from here. A more radical approach, what we call personal SASE, is to left-shift the networking and security stack all the way to the user edge. Think of it like the transition from the mainframe to the minicomputer to the PC in the early 1980s, a rapid migration of compute power to the end user. Personal SASE involves a similar architectural shift, with commensurate productivity gains for the modern hybrid workforce, who expect but rarely get the same level of network performance and seamless security that they currently experience when they step into the office.



Quote for the day:

"If you really want the key to success, start by doing the opposite of what everyone else is doing." -- Brad Szollose