
Daily Tech Digest - July 23, 2025


Quote for the day:

“Our chief want is someone who will inspire us to be what we know we could be.” -- Ralph Waldo Emerson


AI in customer communication: the opportunities and risks SMBs can’t ignore

To build consumer trust, businesses must demonstrate that AI genuinely improves the customer experience, especially by enhancing the quality, relevance and reliability of communication. Given concerns around data misuse and inaccuracy, businesses need to clearly explain how AI supports secure, accurate and personalized interactions, not just internally but in ways customers can understand and see. AI should be positioned as an enabler of human service, taking care of routine tasks so employees can focus on complex, sensitive or high-value customer needs. A key part of gaining long-term trust is transparency around data. Businesses must clearly communicate how customer information is handled securely and show that AI is being used responsibly and with care. This could include clearly labelling AI-generated communications such as emails or text messages, or proactively informing customers about what data is being used and for what purpose.  ... As conversations move beyond why AI should be used to how it must be used responsibly and effectively, companies have entered a make-or-break “audition phase” for AI. In customer communications, businesses can no longer afford to just talk about AI’s benefits; they must prove them by demonstrating how it enhances quality, security, and personalization.


The Expiring Trust Model: CISOs Must Rethink PKI in the Era of Short-Lived Certificates and Machine Identity

While the risk associated with certificates applies to all companies, it is a greater challenge for businesses operating in regulated sectors such as healthcare, where certificates must often be tied to national digital identity systems. In several countries, healthcare providers and services are now required to issue certificates bound to a National Health Identifier (NHI). These certificates are used for authentication, e-signature and encryption in health data exchanges and must adhere to complex issuance workflows, usage constraints and revocation processes mandated by government frameworks. Managing these certificates alongside public TLS certificates introduces operational complexity that few legacy PKI solutions were designed to handle in today’s dynamic and cloud-native environments. ... The urgency of this mandate is heightened by the impending cryptographic shift driven by the rise of quantum computing. Transitioning to post-quantum cryptography (PQC) will require organizations to implement new algorithms quickly and securely. Frequent certificate renewal cycles, which once seemed a burden, could now become a strategic advantage. When managed through automated and agile certificate lifecycle management, these renewals provide the flexibility to rapidly replace compromised keys, rotate certificate authorities or deploy quantum-safe algorithms as they become standardized.
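The renewal agility the article describes usually starts with simply knowing how long each certificate has left. As a minimal sketch (the 30-day threshold and the idea of checking a live endpoint are illustrative assumptions, not details from the article), a Python client can pull a server's TLS certificate and flag it for rotation:

```python
import ssl
import socket
from datetime import datetime, timezone

def days_until_expiry(hostname: str, port: int = 443) -> int:
    """Fetch the server's TLS certificate and return days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # notAfter looks like "Jun  1 12:00:00 2026 GMT"
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def needs_renewal(days_left: int, threshold: int = 30) -> bool:
    """With short-lived certificates, renew well before the deadline."""
    return days_left <= threshold
```

In an automated lifecycle pipeline, a check like this would run on a schedule and trigger re-issuance rather than merely alert a human.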


The CISO code of conduct: Ditch the ego, lead for real

The problem doesn’t stop at vendor interactions. It shows up inside their teams, too. Many CISOs don’t build leadership pipelines; they build echo chambers. They hire people who won’t challenge them. They micromanage strategy. They hoard influence. And they act surprised when innovation dries up or when great people leave. As Jadee Hanson, CISO at Vanta, put it, “Ego builds walls. True leadership builds trust. The best CISOs know the difference.” That distinction matters, especially when your team’s success depends on your ability to listen, adapt, and share the stage. ... Security isn’t just a technical function anymore. It’s a leadership discipline. And that means we need more than frameworks and certifications; we need a shared understanding of how CISOs should show up. Internally, externally, in boardrooms, and in the broader community. That’s why I’m publishing this. Not because I have all the answers, but because the profession needs a new baseline. A new set of expectations. A standard we can hold ourselves, and each other, to. Not about compliance. About conduct. About how we lead. What follows is the CISO Code of Conduct. It’s not a checklist, but a mindset. If you recognize yourself in it, good. If you don’t, maybe it’s time to ask why. Either way, this is the bar. Let’s hold it. ... A lot of people in this space are trying to do the right thing. But there are also a lot of people hiding behind a title.


Phishing simulations: What works and what doesn’t

Researchers conducted a study on the real-world effectiveness of common phishing training methods. They found that the absolute difference in failure rates between trained and untrained users was small across various types of training content. However, we should take this with caution, as the study was conducted within a single healthcare organization and focused only on click rates as the measure of success or failure. It doesn’t capture the full picture. Matt Linton, Google’s security manager, said phishing tests are outdated and often frustrate employees more than they improve security habits. ... For any training program to work, you first need to understand your organization’s risk. Which employees are most at risk? What do they already know about phishing? Next, work closely with your IT or security teams to create phishing tests that match current threats. Tell employees what to expect. Explain why these tests matter and how they help stop problems. Don’t play the blame game. If someone fails a test, treat it as a chance to learn, not to punish. When you do this, employees are less likely to hide mistakes or avoid reporting phishing emails. When picking a vendor, focus on content and realistic simulations. The system should be easy to use and provide helpful reports.


Reclaiming Control: How Enterprises Can Fix Broken Security Operations

Asset management is critical to the success of the security operations function. In order to properly defend assets, I first and foremost need to know about them and be able to manage them. This includes applying policies and controls, and being able to identify assets and their locations when necessary. With the move to hybrid and multi-cloud, asset management is much more difficult than it used to be. ... Visibility enables another key component of security operations – telemetry collection. Without the proper logging, eventing, and alerting, I can’t detect, investigate, analyze, respond to, and mitigate security incidents. Security operations simply cannot operate without telemetry, and the hybrid and multi-cloud world has made telemetry collection much more difficult than it used to be. ... If a security incident is serious enough, there will need to be a formal incident response. This will involve significant planning, coordination with a variety of stakeholders, regular communications, structured reporting, ongoing analysis, and a post-incident evaluation once the response is wrapped up. All of these steps are complicated by hybrid and multi-cloud environments, if not made impossible altogether. The security operations team will not be able to properly engage in incident response if they are lacking the above capabilities, and having a complex environment is not an excuse.


Legacy No More: How Generative AI is Powering the Next Wave of Application Modernization in India

Choosing the right approach to modernise your legacy systems is a challenging task. Generative AI helps overcome the challenges faced in legacy systems and accelerates modernization. For example, it can be used to understand how legacy systems function and distil that understanding into detailed business requirements. The resulting documents can be used to build new systems on the cloud in the second phase. This can make the process cheaper, too, and thus easier to get business cases approved. Additionally, generative AI can help create training documents for the current system if the organization wants to continue using its mainframes. In one example, generative AI might turn business models into microservices, API contracts, and database schemas ready for cloud-native integration. ... You need to have a holistic assessment of your existing system to implement generative AI effectively. Leaders must assess obsolete modules, interdependencies, data schemas, and throughput constraints to pinpoint high-impact targets and establish concrete modernization goals. Revamping legacy applications with generative AI starts with a clear understanding of the existing system. Organizations must conduct a thorough evaluation, mapping performance bottlenecks, obsolete modules, entanglements, and intricacies of the data flow, to create a modernization roadmap.


A Changing of the Guard in DevOps

Asimov, a newcomer in the space, is taking a novel approach — but addressing a challenge that’s as old as DevOps itself. According to the article, the team behind Asimov has zeroed in on a major time sink for developers: The cognitive load of understanding deployment environments and platform intricacies. ... What makes Asimov stand out is not just its AI capability but its user-centric focus. This isn’t another auto-coder. This is about easing the mental burden, helping engineers think less about YAML files and more about solving business problems. It’s a fresh coat of paint on a house we’ve been renovating for over a decade. ... Whether it’s a new player like Asimov or stalwarts like GitLab and Harness, the pattern is clear: AI is being applied to the same fundamental problems that have shaped DevOps from the beginning. The goals haven’t changed — faster cycles, fewer errors, happier teams — but the tools are evolving. Sure, there’s some real innovation here. Asimov’s knowledge-centric approach feels genuinely new. GitLab’s AI agents offer a logical evolution of their existing ecosystem. Harness’s plain-language chat interface lowers the barrier to entry. These aren’t just gimmicks. But the bigger story is the convergence. AI is no longer an outlier or an optional add-on — it’s becoming foundational. And as these solutions mature, we’re likely to see less hype and more impact.


Data Protection vs. Cyber Resilience: Mastering Both in a Complex IT Landscape

Traditional disaster recovery (DR) approaches designed for catastrophic events and natural disasters are still necessary today, but companies must implement a more security-event-oriented approach on top of that. Legacy approaches to disaster recovery are insufficient in an environment that is rife with cyberthreats, as these approaches focus on infrastructure, neglecting application-level dependencies and validation processes. Further, threat actors have moved beyond interrupting services and now target data to poison, encrypt or exfiltrate it. ... Cyber resilience is now essential. With ransomware that can encrypt systems in minutes, the ability to recover quickly and effectively is a business imperative. Therefore, companies must develop an adaptive, layered strategy that evolves with emerging threats and aligns with their unique environment, infrastructure and risk tolerance. To effectively prepare for the next threat, technology leaders must balance technical sophistication with operational discipline; the best defence is not solely a hardened perimeter, it’s also having a recovery plan that works. Today, companies cannot afford to choose between data protection and cyber resilience; they must master both.


Anthropic researchers discover the weird AI problem: Why thinking longer makes models dumber

The findings challenge the prevailing industry wisdom that more computational resources devoted to reasoning will consistently improve AI performance. Major AI companies have invested heavily in “test-time compute” — allowing models more processing time to work through complex problems — as a key strategy for enhancing capabilities. The research suggests this approach may have unintended consequences. “While test-time compute scaling remains promising for improving model capabilities, it may inadvertently reinforce problematic reasoning patterns,” the authors conclude. For enterprise decision-makers, the implications are significant. Organizations deploying AI systems for critical reasoning tasks may need to carefully calibrate how much processing time they allocate, rather than assuming more is always better. ... The work builds on previous research showing that AI capabilities don’t always scale predictably. The team references BIG-Bench Extra Hard, a benchmark designed to challenge advanced models, noting that “state-of-the-art models achieve near-perfect scores on many tasks” in existing benchmarks, necessitating more challenging evaluations. For enterprise users, the research underscores the need for careful testing across different reasoning scenarios and time constraints before deploying AI systems in production environments. 


How to Advance from SOC Manager to CISO?

Strategic thinking demands a firm grip on the organization's core operations, particularly how it generates revenue and its key value streams. This perspective allows security professionals to align their efforts with business objectives, rather than operating in isolation. ... This is related to strategic thinking but emphasizes knowledge of risk management and finance. Security leaders must factor in financial impacts to justify security investments and manage risks effectively. Balancing security measures with user experience and system availability is another critical aspect. If security policies are too strict, productivity can suffer; if they're too permissive, the company can be exposed to threats. ... Effective communication is vital for translating technical details into language senior stakeholders can grasp and act upon. This means avoiding jargon and abbreviations to convey information in a straightforward manner that resonates with multiple stakeholders, including executives who may not have a deep technical background. Communicating the impact of security initiatives in clear, concise language ensures decisions are well-informed and support company goals. ... You will have to ensure technical services meet business requirements, particularly in managing service delivery, implementing change, and resolving issues. All of this is essential for a secure and efficient IT infrastructure.

Daily Tech Digest - May 17, 2025


Quote for the day:

“Only those who dare to fail greatly can ever achieve greatly.” -- Robert F. Kennedy


Top 10 Best Practices for Effective Data Protection

Your first instinct may be to try to keep up with all your data, but this may be a fool's errand. The key to success is to have classification capabilities everywhere data moves, and rely on your DLP policy to jump in when risk arises. Automation in data classification is becoming a lifesaver thanks to the power of AI. AI-powered classification can be faster and more accurate than traditional ways of classifying data with DLP. Ensure any solution you are evaluating can use AI to instantly discover and classify data without human input. ... Data loss prevention (DLP) technology is the core of any data protection program. That said, keep in mind that DLP is only a subset of a larger data protection solution. DLP enables the classification of data (alongside AI) to ensure you can accurately find sensitive data. Ensure your DLP engine can consistently alert correctly on the same piece of data across devices, networks, and clouds. The best way to ensure this is to embrace a centralized DLP engine that can cover all channels at once. Avoid point products that bring their own DLP engine, as this can lead to multiple alerts on one piece of moving data, slowing down incident management and response. Look to embrace Gartner's security service edge approach, which delivers DLP from a centralized cloud service.
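For contrast with AI-powered classification, the traditional pattern-matching baseline is only a few lines. This sketch is illustrative only — the regexes below are simplified stand-ins, far cruder than what a production DLP engine ships:

```python
import re

# Illustrative patterns only; real DLP engines use validation logic
# (e.g. checksums) and contextual signals, not bare regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of sensitive-data labels detected in the text."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}
```

The weakness is exactly what the article notes: rules like these miss anything they were not written for, which is where AI-assisted discovery earns its keep.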


4 Keys To Successful Change Management From The Bain Playbook

From the start, Bain was crystal clear about its case for change, according to Razdan. The company prioritized change management, which meant IT partnering with finance; it also meant cultivating a mindset conducive to change. “We owned the change; we identified a group of high performers within our finance and our IT teams. This community of super-users could readily identify and deal with any of the problems that typically arise in an implementation of this size and scale,” Mackey said. “This was less just changing their technology; it’s changing employee behaviors and setting us up for how we want to grow and change processes going forward.” ... “We actually set up a program to be always measuring the value,” Razdan said. “You have internal stakeholders, you have external stakeholders, you have partnerships; we kind of built an ecosystem of governance and partnership that enabled us to keep everybody on the same page because transparency and communication is critical to success.” Gauging progress via transparent key performance indicators was all the more impressive, given that most of this happened during the worldwide, pandemic-driven move to remote work. “We could assess the implementation, as we went through it, to keep us on track [and] course correct,” Mackey said. 


Emerging AI security risks exposed in Pangea's global study

A significant finding was the non-deterministic nature of large language model (LLM) security. Prompt injection attacks, a method where attackers manipulate input to provoke undesired responses from AI systems, were found to succeed unpredictably. An attack that fails 99 times could succeed on the 100th attempt with identical input, due to the underlying randomness in LLM processing. The study also revealed substantial risks of data leakage and adversarial reconnaissance. Attackers using prompt injection can manipulate AI models to disclose sensitive information or contextual details about the environment in which the system operates, such as server types and network access configurations. 'This challenge has given us unprecedented visibility into real-world tactics attackers are using against AI applications today,' said Oliver Friedrichs, Co-Founder and Chief Executive Officer of Pangea. 'The scale and sophistication of attacks we observed reveal the vast and rapidly evolving nature of AI security threats. Defending against these threats must be a core consideration for security teams, not a checkbox or afterthought.' Findings indicated that basic defences, such as native LLM guardrails, left organisations particularly exposed. 
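The non-determinism the study highlights can be made concrete with a small simulation. This is a sketch, not the study's methodology: the per-attempt success probability below is an invented stand-in for an LLM's underlying randomness:

```python
import random

def attack_succeeds(p_success: float, rng: random.Random) -> bool:
    """Stand-in for one prompt-injection attempt against a stochastic LLM."""
    return rng.random() < p_success

def estimate_success_rate(p_success: float, trials: int, seed: int = 0) -> float:
    """Repeat the identical attack many times. Even a ~1% per-attempt
    rate succeeds eventually, which is why single-shot tests mislead."""
    rng = random.Random(seed)
    hits = sum(attack_succeeds(p_success, rng) for _ in range(trials))
    return hits / trials
```

The practical takeaway is that defences should be evaluated over many repeated identical attempts, not a single pass.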


Dynamic DNS Emerges as Go-to Cyberattack Facilitator

Dynamic DNS (DDNS) services automatically update a domain name's DNS records in real-time when the Internet service provider changes the IP address. Real-time updating for DNS records wasn't needed in the early days of the Internet when static IP addresses were the norm. ... It sounds simple enough, yet bad actors have abused the services for years. More recently, though, cybersecurity vendors have observed an increase in such activity, especially this year. The notorious cybercriminal collective Scattered Spider, for instance, has turned to DDNS to obfuscate its malicious activity and impersonate well-known brands in social engineering attacks. This trend has some experts concerned about a rise in abuse and a surge in "rentable" subdomains. ... In an example of an observed attack, Scattered Spider actors established a new subdomain, klv1.it[.]com, designed to impersonate a similar domain, klv1.io, for Klaviyo, a Boston-based marketing automation company. Silent Push's report noted that the malicious domain had just five detections on VirusTotal at the time of publication. The company also said the use of publicly rentable subdomains presents challenges for security researchers. "This has been something that a lot of threat actors do — they use these services because they won't have domain registration fingerprints, and it makes it harder to track them," says Zach Edwards, senior threat researcher at Silent Push.
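Mechanically, a DDNS client just watches for the ISP-assigned address to change and pushes an update when it does. A minimal sketch of that logic follows — the update-endpoint shape is a hypothetical modeled on common provider conventions, so consult your provider's actual API:

```python
def record_needs_update(current_ip: str, dns_ip: str) -> bool:
    """A DDNS client pushes a new A record only when the public IP changes."""
    return current_ip != dns_ip

def build_update_url(base: str, hostname: str, ip: str) -> str:
    """Compose an HTTP update request in the style many DDNS providers
    accept. The endpoint path and parameters here are assumptions."""
    return f"{base}/nic/update?hostname={hostname}&myip={ip}"
```

That same low-friction update path is what makes the services attractive to attackers: a subdomain can be pointed at new infrastructure in seconds, with no domain registration trail.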


The Growing and Changing Threat of Deepfake Attacks

To ensure their deepfake attacks are convincing, malicious actors are increasingly focusing on more believable delivery through enhanced methods such as phone number spoofing, SIM swapping, malicious recruitment accounts and information-stealing malware. These methods allow actors to convincingly deliver deepfakes and significantly increase a ploy’s overall credibility. ... High-value deepfake targets, such as C-suite executives, key data custodians, or other significant employees, often have moderate to high volumes of data available publicly. In particular, employees appearing on podcasts, giving interviews, attending conferences, or uploading videos expose significant volumes of moderate- to high-quality data for use in deepfakes. This dictates that understanding individual data exposure becomes a key part of accurately assessing the overall enterprise risk of deepfakes. Furthermore, ACI research indicates industries such as consulting, financial services, technology, insurance and government often have sufficient publicly available data to enable medium- to high-quality deepfakes. Ransomware groups are also continuously leaking a high volume of enterprise data. This information can help fuel deepfake content to “talk” about genuine internal documents, employee relationships and other internal details.


Binary Size Matters: The Challenges of Fitting Complex Applications in Storage-Constrained Devices

Although we are focusing on software here, it is important to say that software does not run in a vacuum. Having an understanding of the hardware our programs run on, and even how hardware is developed, can offer important insights into how to tackle programming challenges. In the software world, we have a more iterative process; new features and fixes can usually be incorporated later in the form of over-the-air updates, for example. That is not the case with hardware. Design errors and faults in hardware can at the very best be mitigated with considerable performance penalties. Such errors can introduce vulnerabilities like Meltdown and Spectre, or render the whole device unusable. Therefore the hardware design phase has a much longer and more rigorous process before release than the software design phase. This rigorous process also impacts design decisions in terms of optimizations and computational power. Once you define a layout and bill of materials for your device, the expectation is to keep this constant for production as long as possible in order to reduce costs. Embedded hardware platforms are designed to be very cost-effective. Designing a product whose specifications, such as memory or I/O count, go unused also means a cost increase in an industry where every cent in the bill of materials matters.


Cyber Insurance Applications: How vCISOs Bridge the Gap for SMBs

Proactive risk evaluation is a game-changer for SMBs seeking to maintain robust insurance coverage. vCISOs conduct regular risk assessments to quantify an organization’s security posture and benchmark it against industry standards. This not only identifies areas for improvement but also helps maintain compliance with evolving insurer expectations. Routine audits—led by vCISOs—keep security controls effective and relevant. Third-party risk evaluations are particularly valuable, given the rise in supply chain attacks. By ensuring vendors meet security standards, SMBs reduce their overall risk profile and strengthen their position during insurance applications and renewals. Employee training programs also play a critical role. By educating staff on phishing, social engineering, and other common threats, vCISOs help prevent incidents before they occur. ... For SMBs, navigating the cyber insurance landscape is no longer just a box-checking exercise. Insurers demand detailed evidence of security measures, continuous improvement, and alignment with industry best practices. vCISOs bring the technical expertise and strategic perspective necessary to meet these demands while empowering SMBs to strengthen their overall security posture.


How to establish an effective AI GRC framework

Because AI introduces risks that traditional GRC frameworks may not fully address, such as algorithmic bias and lack of transparency and accountability for AI-driven decisions, an AI GRC framework helps organizations proactively identify, assess, and mitigate these risks, says Heather Clauson Haughian, co-founding partner at CM Law, who focuses on AI technology, data privacy, and cybersecurity. “Other types of risks that an AI GRC framework can help mitigate include things such as security vulnerabilities where AI systems can be manipulated or exposed to data breaches, as well as operational failures when AI errors lead to costly business disruptions or reputational harm,” Haughian says. ... Model governance and lifecycle management are also key components of an effective AI GRC strategy, Haughian says. “This would cover the entire AI model lifecycle, from data acquisition and model development to deployment, monitoring, and retirement,” she says. This practice will help ensure AI models are reliable, accurate, and consistently perform as expected, mitigating risks associated with model drift or errors, Haughian says. ... Good policies balance out the risks and opportunities that AI and other emerging technologies, including those requiring massive data, can provide, Podnar says. “Most organizations don’t document their deliberate boundaries via policy,” Podnar says. 


How to Keep a Consultant from Stealing Your Idea

The best defense is a good offense, Thirmal says. Before sharing any sensitive information, get the consultant to sign a non-disclosure agreement (NDA) and, if needed, a non-compete agreement. "These legal documents set clear boundaries on what consultants can and can't do with your ideas." He also recommends retaining records -- meeting notes, emails, and timestamps -- to provide documented proof of when and where the idea in question was discussed. ... If a consultant takes an idea and commercializes it, or shares it with a competitor, it's time to consult legal counsel, Paskalev says. The legal case's strength will hinge on the exact wording within contracts and documentation. "Sometimes, a well-crafted cease-and-desist letter is enough; other times, litigation is required." ... The best way to protect ideas isn't through contracts -- it's by being proactive, Thirmal advises. "Train your team to be careful about what they share, work with consultants who have strong reputations, and document everything," he states. "Protecting innovation isn’t just a legal issue -- it's a strategic one." Innovation is an IT leader's greatest asset, but it's also highly vulnerable, Paskalev says. "By proactively structuring consultant agreements, meticulously documenting every stage of idea development, and being ready to enforce protection, organizations can ensure their competitive edge."


Even the Strongest Leaders Burn Out — Here's the Best Way to Shake the Fatigue

One of the most overlooked challenges in leadership is the inability to step back from the work and see the full picture. We become so immersed in the daily fires, the high-stakes meetings, the make-or-break moments, that we lose the ability to assess the battlefield objectively. The ocean, or any intense, immersive activity, provides that critical reset. But stepping away isn't just about swimming in the ocean. It's about breaking patterns. Leaders are often stuck in cycles — endless meetings, fire drills, back-to-back calls. The constant urgency can trick you into believing that everything is critical. That's why you need moments that pull you out of the daily grind, forcing you to reset before stepping back in. This is where intentional recovery becomes a strategic advantage. Top-performing leaders across industries — from venture capitalists to startup founders — intentionally carve out time for activities that challenge them in different ways. ... The most effective leaders understand that managing their energy is just as important as managing their time. When energy levels dip, cognitive function suffers, and decision-making becomes less strategic. That's why companies known for their progressive workplace cultures integrate mindfulness practices, outdoor retreats and wellness programs — not as perks, but as necessary investments in long-term performance.

Daily Tech Digest - April 01, 2025


Quote for the day:

"Strategy is not really a solo sport, even if you're the CEO." -- Max McKeown


MCP: The new “USB-C for AI” that’s bringing fierce rivals together

So far, MCP has also garnered interest from multiple tech companies in a rare show of cross-platform collaboration. For example, Microsoft has integrated MCP into its Azure OpenAI service, and as we mentioned above, Anthropic competitor OpenAI is on board. Last week, OpenAI acknowledged MCP in its Agents API documentation, with vocal support from the boss upstairs. "People love MCP and we are excited to add support across our products," wrote OpenAI CEO Sam Altman on X last Wednesday. ... To make the connections behind the scenes between AI models and data sources, MCP uses a client-server model. An AI model (or its host application) acts as an MCP client that connects to one or more MCP servers. Each server provides access to a specific resource or capability, such as a database, search engine, or file system. When the AI needs information beyond its training data, it sends a request to the appropriate server, which performs the action and returns the result. To illustrate how the client-server model works in practice, consider a customer support chatbot using MCP that could check shipping details in real time from a company database. "What's the status of order #12345?" would trigger the AI to query an order database MCP server, which would look up the information and pass it back to the model. 
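That order-status flow can be sketched as a toy in-process exchange. The message shapes below loosely follow MCP's JSON-RPC style but are heavily simplified, and the tool name, order data, and server logic are invented for illustration:

```python
import json

# Toy stand-in for an MCP server exposing one tool over a database.
ORDERS = {"12345": {"status": "shipped", "eta": "2025-04-03"}}

def order_server(request_json: str) -> str:
    """Handle a JSON-RPC-style tools/call request, as an MCP server would."""
    req = json.loads(request_json)
    order_id = req["params"]["arguments"]["order_id"]
    result = ORDERS.get(order_id, {"status": "not found"})
    return json.dumps({"id": req["id"], "result": result})

def client_lookup(order_id: str) -> dict:
    """The model's host acts as the MCP client: it sends the request and
    feeds the structured result back into the model's context."""
    request = json.dumps({
        "id": 1,
        "method": "tools/call",
        "params": {"name": "get_order_status",
                   "arguments": {"order_id": order_id}},
    })
    return json.loads(order_server(request))["result"]
```

In a real deployment the two sides would be separate processes communicating over stdio or HTTP, but the request/response division of labor is the same.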


Why global tensions are a cybersecurity problem for every business

As global polarization intensifies, cybersecurity threats have become increasingly hybridized, complicating the landscape for threat attribution and defense. Michael DeBolt, Chief Intelligence Officer at Intel 471, explains: “Increasing polarization worldwide has seen the expansion of the state-backed threat actor role, with many established groups taking on financially motivated responsibilities alongside their other strategic goals.” This evolution is notably visible in threat actors tied to countries such as China, Iran, and North Korea. According to DeBolt, “Heightened geopolitical tensions have reflected this transition in groups originating from China, Iran, and North Korea over the last couple of years—although the latter is somewhat more well-known for its duplicitous activity that often blurs the line of more traditional e-crime threats.” These state-backed groups increasingly blend espionage and destructive attacks with financially motivated cybercrime techniques, complicating attribution and creating significant practical challenges for organizations. DeBolt highlights the implications: “A primary practical issue organizations are facing is threat attribution, with a follow-on issue being maintaining an effective security posture against these hybrid threats.”


How to take your first steps in AI without falling off a cliff

It is critical to bring all stakeholders on board through education and training on the fundamental building blocks of data and AI. This involves understanding what’s accessible in the market and differentiating between various AI technologies. Executive buy-in is crucial, and by planning for internal process outcomes first, organisations can better position themselves to achieve meaningful outcomes in the future. ... Don’t bite off more than you can chew! Trying to deploy a complex AI solution to the entire organisation is asking for trouble. It is better to identify early adopter departments where specific AI pilots and proofs of concept can be introduced and their value measured. Eventually, you might establish an AI assistant studio to develop dedicated AI tools for each use case according to individual needs. ... People are often wary of change, particularly change with such far-reaching implications in terms of how we work. Clear communication, training, and ongoing support will all help reassure employees who fear being left behind. ... In the context of data and AI, the perspective shifts somewhat. Most organisations already have policies in place for public cloud adoption. However, the approach to AI and data must be more nuanced, given the vast potential of the technology involved.


6 hard-earned tips for leading through a cyberattack — from CSOs who’ve been there

Authority under crisis is meaningless if you can’t establish followership. And this goes beyond the incident response team: CISOs must communicate with the entire organization — a commonly misunderstood imperative, says Pablo Riboldi, CISO of nearshore talent provider BairesDev. ... “Organizations should provide training on stress management and decision-making under pressure, which includes perhaps mental health support resources in the incident response plan,” Ngui says. Larry Lidz, vice president of CX Security at Cisco, also advocates for tabletop exercises as a way to get employees to “look at problems through a different set of lenses than they would otherwise look at them.” ... Remaining calm in the face of a cyberattack can be challenging, but prime performance requires it, New Relic’s Gutierrez says. “There’s a lot of reaction. There’s a lot of strong feelings and emotions that go on during incidents,” Gutierrez says. Although Gutierrez admits to moments of lost composure, they say they have generally stayed calm under cyber duress, something they take pride in. Demonstrating composure as a leader under fire is important because it can influence how others feel, behave, and act.


A “Measured” Approach to Building a World-Class Offensive Security Program

First, map the top threats and threat actors most likely to find your organization an attractive target, along with the top “crown jewel” systems they would target for compromise. Remaining at the enterprise level, the next step is to establish an internal framework and underlying program that graphs threats and risks, and provides a repeatable mechanism to track and refresh that understanding over time. This includes graphs of all enterprise systems and their associated connections and dependencies, as well as attack graphs that represent all the potential paths through your architecture that would lead an attacker to their prize. The third element is an architectural security review that discerns from the graphs which paths are most possible and probable. Installing a program that guides and tracks these three activities will also pay dividends down the line by better informing and increasing the efficacy of adversarial simulations. We all know the devil resides in the details. At this stage we begin understanding the actual vulnerability of individual assets and systems. The first step is a comprehensive inventory of elements that exist across the organization, including internal endpoint assets and external perimeter and cloud systems. As you’d likely expect, the next step is vulnerability scanning of the full asset inventory that was established.
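As a sketch of the attack-graph idea described above, a few lines of Python can enumerate every path from an attacker's likely entry points to a crown-jewel system. The topology, system names, and entry points below are hypothetical; real programs build these graphs from asset inventories and network data.

```python
from collections import defaultdict

def attack_paths(edges, entry_points, crown_jewels):
    """Enumerate simple paths from attacker entry points to crown-jewel systems."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)

    paths = []

    def dfs(node, path):
        if node in crown_jewels:
            paths.append(path)
            return
        for nxt in graph[node]:
            if nxt not in path:  # keep paths simple (no revisits)
                dfs(nxt, path + [nxt])

    for entry in entry_points:
        dfs(entry, [entry])
    return paths

# Hypothetical topology: (system, system reachable from it)
edges = [
    ("vpn", "workstation"), ("workstation", "file-server"),
    ("workstation", "db"), ("file-server", "db"), ("web-app", "db"),
]
paths = attack_paths(edges, entry_points=["vpn", "web-app"], crown_jewels={"db"})
for path in paths:
    print(" -> ".join(path))
```

Ranking these enumerated paths by likelihood is what the architectural review step then does.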


How AI Agents Are Quietly Transforming Frontend Development

Traditional developer tools are passive. You run a linter, and it tells you what’s wrong. You run a build tool, and it compiles. But AI agents are proactive. They don’t wait for instructions; they interpret high-level goals and try to execute them. Want to improve page performance? An agent can analyze your critical rendering path, optimize image sizes, and suggest lazy loading. Want a dark mode implemented across your UI library? It can crawl through your components and offer scoped changes that preserve brand integrity. ... Frontend development has always been plagued by complexity: thousands of packages, constantly changing frameworks, and pixel-perfect demands from designers. AI agents bring sanity to the chaos, leaving cloud security as the main remaining worry; run an agent locally and even that problem is resolved. They can serve as design-to-code translators, turning Figma files into functional components. They can manage breakpoints, ARIA attributes, and responsive behaviors automatically. They can even test components for edge cases by generating test scenarios that a developer might miss. Because these agents are always “on,” they notice patterns developers sometimes overlook. That dropdown menu that breaks on Safari 14? Flagged. That padding inconsistency between modals? Caught.


Agentic AI won’t make public cloud providers rich

Agentic AI isn’t what most people think it is. When I look at these systems, I see something fundamentally different from the brute-force AI approaches we’re accustomed to. Consider agentic AI more like a competent employee than a powerful calculator. What’s fascinating is how these systems don’t need centralized processing power. Instead, they operate more like distributed networks, often running on standard hardware and coordinating across different environments. They’re clever about using resources, pulling in specialized small language models when needed, and integrating with external services on demand. The real breakthrough isn’t about raw power—it’s about creating more intelligent, autonomous systems that can efficiently accomplish tasks. The big cloud providers emphasize their AI and machine learning capabilities alongside data management and hybrid cloud solutions, whereas agentic AI systems are likely to take a more distributed approach. These systems will integrate with large language models primarily as external services rather than core components. This architectural pattern favors smaller, purpose-built language models and distributed processing over centralized cloud resources. Ask me how I know. I’ve built dozens for my clients recently.


Cloud a viable choice amid uncertain AI returns

Enterprises can restrict data using internal controls and limit data movement to chosen geographical locations. The cluster can be customized and secured to meet the specific requirements of the enterprise without the constraints of using software or hardware configured and operated by a third party. Given these characteristics, for convenience, Uptime Institute has labeled the method as “best” in terms of customization and control. ... The challenge for enterprises is determining whether the added reassurance of dedicated infrastructure provides a real return on its substantial premium over the “better” option. Many large organizations - from financial services to healthcare - already use the public cloud to hold sensitive data. To secure data, an organization may encrypt data at rest and in transit, configure appropriate access controls, such as security groups, and set up alerts and monitoring. Many cloud providers have data centers approved for government use. It is unreasonable to view the cloud as inherently insecure or non-compliant, considering its broad use across many industries. Although dedicated infrastructure gives reassurance that data is being stored and processed at a particular location, it is not necessarily more secure or compliant than the cloud. 


Why no small business is too small for hackers - and 8 security best practices for SMBs

To be clear, the size of your business isn't particularly relevant to bulk attacks. It's merely the fact that you are one of many businesses that can be targeted through random IP number generation or email harvesting or some other process that makes it very, very cost-effective for a hacker to be able to deliver a piece of malware that opens up computers in your business for opportunistic activities. ... Attackers -- who could be affiliated with organized crime groups, individual hackers, or even teams funded by nation-states -- often use pre-built hacking tools they can deploy without a tremendous amount of research and development. For hackers, this tactic is roughly the equivalent of downloading an app from an app store, although the hacking tools are usually purchased or downloaded from hacker-oriented websites and hidden forums (what some folks call "the dark web"). ... "Many SMB owners assume cybersecurity is too costly or too complex and think they don't have the IT knowledge or resources to set up reliable security. Few realize that they could set up security in a half hour. Moreover, the lack of dedicated cyber staff further complicates the situation for SMBs, making it even more daunting to implement and manage effective security measures."


AI is making the software supply chain more perilous than ever

The software supply chain is a link in modern IT environments that is as crucial as it is vulnerable. The new research report by JFrog, released during KubeCon + CloudNativeCon Europe in London, shows that organizations are struggling with increasing threats that are amplified, unsurprisingly, by the rise of AI. ... The report identifies a “quad-fecta” of threats to the integrity and security of the software supply chain: vulnerabilities (CVEs), malicious packages, exposed secrets and configuration errors/human error. JFrog’s research team detected no fewer than 25,229 exposed secrets and tokens in public repositories – an increase of 64% compared to last year. Worryingly, 27% of these exposed secrets were still active. This interwoven set of security dangers makes it particularly difficult for organizations to keep their digital walls consistently in order. ... “More is not always better,” the report states. The collection of tools can make organizations more vulnerable due to increased complexity for developers. At the same time, visibility into the programming code remains a problem: only 43% of IT professionals say that their organization applies security scans at both the code and binary level. This is a decrease from 56% compared to last year and indicates that teams still have large blind spots when identifying software risks.
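A minimal sketch of what secret scanning involves: matching source lines against known token shapes. The patterns below are illustrative only; production scanners, such as those behind the figures JFrog reports, maintain far larger curated rule sets and also scan binaries.

```python
import re

# Illustrative patterns only; real scanners ship much larger curated rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api|secret)[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (pattern_name, line_number) pairs for every suspected secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits

sample = 'config = {"region": "us-east-1"}\napi_key = "abcdef0123456789abcdef01"\n'
print(scan_text(sample))
```

Running a scan like this in CI catches a secret before it reaches a public repository, where the report found 27% of exposed tokens were still active.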

Daily Tech Digest - December 29, 2024

AI agents may lead the next wave of cyberattacks

“Many organizations run a pen test on maybe an annual basis, but a lot of things change within an application or website in a year,” he said. “Traditional cybersecurity organizations within companies have not been built for constant self-penetration testing.” Stytch is attempting to improve upon what McGinley-Stempel said are weaknesses in popular authentication schemes such as the Completely Automated Public Turing test to tell Computers and Humans Apart, or captcha, a type of challenge-response test used to determine whether a user interacting with a system is a human or a bot. Captcha codes may require users to decipher scrambled letters or count the number of traffic lights in an image. ... “If you’re just going to fight machine learning models on the attacking side with ML models on the defensive side, you’re going to get into some bad probabilistic situations that are not going to necessarily be effective,” he said. Probabilistic security provides protections based on probabilities but assumes that absolute security can’t be guaranteed. Stytch is working on deterministic approaches such as fingerprinting, which gathers detailed information about a device or software based on known characteristics and can provide a higher level of certainty that the user is who they say they are.
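The deterministic idea behind fingerprinting can be sketched by canonicalizing known device characteristics and hashing them, so the same device always yields the same identifier. The attribute names below are hypothetical; real fingerprinting gathers many more signals (fonts, canvas rendering, TLS parameters, and so on).

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Derive a stable identifier from known device characteristics."""
    # Canonical JSON (sorted keys, no whitespace) makes the hash order-independent.
    canonical = json.dumps(attributes, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

device = {
    "user_agent": "Mozilla/5.0 (Macintosh)",
    "screen": "1920x1080",
    "timezone": "Europe/Amsterdam",
    "platform": "MacIntel",
}
fingerprint = device_fingerprint(device)
print(fingerprint[:16])  # same attributes always yield the same value
```

Unlike a probabilistic bot score, two requests either produce the same fingerprint or they do not, which is what gives the approach its deterministic character.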


How businesses can ensure cloud uptime over the holidays

To ensure uptime during the holidays, best practice should include conducting pre-holiday stress tests to identify system vulnerabilities and configure autoscaling to handle demand surges. Experts also recommend simulating failures through chaos engineering to expose weaknesses. Redundancy across regions or availability zones is essential, as is a well-documented incident response plan – with clear escalation paths – “as this allows a team to address problems quickly even with reduced staffing,” says VimalRaj Sampathkumar, technical head – UKI at software company ManageEngine. It’s all about understanding the business requirements and what your demand is going to look like, says Luan Hughes, chief information officer (CIO) at tech provider Telent, as this will vary from industry to industry. “When we talk about preparedness, we talk a lot about critical incident management and what happens when big things occur, but I think you need to have an appreciation of what your triggers are,” she says. ... It’s also important to focus on your people as much as your systems, she adds, noting that it’s imperative to understand your management processes, out-of-hours and on-call rota and how you action support if problems do arise.
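A pre-holiday stress test can start as simply as hammering an endpoint concurrently and checking failure counts and tail latency. The sketch below exercises a simulated handler; in practice `handle_request` would be an HTTP call against a staging environment, and the concurrency would be sized from expected holiday demand.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(order_id: int) -> str:
    """Stand-in for a real endpoint; in practice this would be an HTTP call."""
    time.sleep(0.005)  # simulated service latency
    return f"order-{order_id}-ok"

def stress_test(concurrency: int, total_requests: int):
    latencies = []

    def timed_call(i):
        start = time.perf_counter()
        result = handle_request(i)
        latencies.append(time.perf_counter() - start)
        return result

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(timed_call, range(total_requests)))
    failures = sum(1 for r in results if not r.endswith("ok"))
    p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th-percentile latency
    return failures, p95

failures, p95 = stress_test(concurrency=20, total_requests=200)
print(f"failures={failures}, p95={p95 * 1000:.1f}ms")
```

Comparing the failure count and p95 latency against your service-level targets tells you whether autoscaling thresholds need adjusting before traffic surges.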


Tech worker movements grow as threats of RTO, AI loom

While layoffs likely remain the most extreme threat to tech workers broadly, a return-to-office (RTO) mandate can be just as jarring for remote tech workers who are either unable to comply or else unwilling to give up the better work-life balance that comes with no commute. Advocates told Ars that RTO policies have pushed workers to join movements, while limited research suggests that companies risk losing top talents by implementing RTO policies. ... Other companies mandating RTO faced similar backlash from workers, who continued to question the logic driving the decision. One February study showed that RTO mandates don't make companies any more valuable but do make workers more miserable. And last month, Brian Elliott, an executive advisor who wrote a book about the benefits of flexible teams, noted that only one in three executives thinks RTO had "even a slight positive impact on productivity." But not every company drew a hard line the way that Amazon did. For example, Dell gave workers a choice to remain remote and accept they can never be eligible for promotions, or mark themselves as hybrid. Workers who refused the RTO said they valued their free time and admitted to looking for other job opportunities.


Navigating the cloud and AI landscape with a practical approach

When it comes to AI or genAI, just like everyone else, we started with use cases that we can control. These include content generation, sentiment analysis and related areas. As we explored these use cases and gained understanding, we started to dabble in other areas. For example, we have an exciting use case for cleaning up our data that leverages genAI as well as non-generative machine learning to help us identify inaccurate product descriptions or incorrect classifications and then clean them up and regenerate accurate, standardized descriptions. ... While this might be driving internal productivity, you also must think of it this way: As a distributor, at any one time, we deal with millions of parts. Our supplier partners keep sending us their price books, spec sheets and product information every quarter. So, having a group of people trying to go through all that data to find inaccuracies is a daunting, almost impossible, task. But with AI and genAI capabilities, we can clean up any inaccuracies far more quickly than humans could. Sometimes within as little as 24 hours. That helps us improve our ability to convert and drive business through an improved experience for our customers.


When the System Fights Back: A Journey into Chaos Engineering

Enter chaos engineering — the art of deliberately creating disaster to build stronger systems. I’d read about Netflix’s Chaos Monkey, a tool designed to randomly kill servers in production, and I couldn’t help but admire the audacity. What if we could turn our system into a fighter — one that could take a punch and still come out swinging? ... Chaos engineering taught me more than I expected. It’s not just a technical exercise; it’s a mindset. It’s about questioning assumptions, confronting fears, and embracing failure as a teacher. We integrated chaos experiments into our CI/CD pipeline, turning them into regular tests. Post-mortems became celebrations of what we’d learned, rather than finger-pointing sessions. And our systems? Stronger than ever. But chaos engineering isn’t just about the tech. It’s about the culture you build around it. It’s about teaching your team to think like detectives, to dig into logs and metrics with curiosity instead of dread. It’s about laughing at the absurdity of breaking things on purpose and marveling at how much you learn when you do. So here’s my challenge to you: embrace the chaos. Whether you’re running a small app or a massive platform, the principles hold true. 
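In the spirit of Chaos Monkey, a chaos experiment can be as small as injecting failures into a dependency and checking whether the retry logic survives. Everything below is a self-contained simulation with hypothetical names; real experiments target actual infrastructure under a controlled blast radius.

```python
import random

class FlakyService:
    """Simulated downstream dependency; the chaos harness injects failures on purpose."""
    def __init__(self, failure_rate: float, seed: int = 42):
        self.failure_rate = failure_rate
        self.rng = random.Random(seed)  # seeded so the experiment is repeatable

    def call(self) -> str:
        if self.rng.random() < self.failure_rate:
            raise ConnectionError("injected failure")
        return "ok"

def resilient_call(service, retries: int = 3) -> str:
    """The behaviour under test: does our retry logic survive injected chaos?"""
    for attempt in range(retries + 1):
        try:
            return service.call()
        except ConnectionError:
            if attempt == retries:
                raise

service = FlakyService(failure_rate=0.3)
outcomes = []
for _ in range(100):
    try:
        outcomes.append(resilient_call(service))
    except ConnectionError:
        outcomes.append("failed")
print(f"{outcomes.count('ok')}/100 calls succeeded despite a 30% injected failure rate")
```

Experiments like this can run as regular tests in a CI/CD pipeline, exactly as described above, so resilience regressions surface before production does the injecting for you.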


Enhancing Your Company’s DevEx With CI/CD Strategies

CI/CD pipelines are key to an engineering organization’s efficiency, used by up to 75% of software companies with developers interacting with them daily. However, these CI/CD pipelines are often far from being the ideal tool to work with. A recent survey found that only 14% of practitioners go from code to production in less than a day, when high-performing teams should be able to deploy multiple times a day. ... Merging, building, deploying and running are all classic steps of a CI/CD pipeline, often handled by multiple tools. Some organizations have SREs that handle these functions, but not all developers are that lucky! In that case, if a developer wants to push code where a pipeline isn’t set up — an increasingly common situation with the rise of microservices — they must assemble those rarely-used tools. However, this will disturb the flow state you wish your developers to remain in. ... Troubleshooting issues within a CI/CD pipeline can be challenging for developers due to a lack of visibility and information. These processes often operate as black boxes, running on servers that developers may not have direct access to, with software that is foreign to developers. Consequently, developers frequently rely on DevOps engineers — often understaffed — to diagnose problems, leading to slow feedback loops.
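One antidote to black-box pipelines is making stage boundaries and failures explicit. This toy runner, with hypothetical stage names, shows the kind of visibility developers need: which stage failed, why, and how long each step took.

```python
import time

def run_pipeline(stages):
    """Run named stages in order, surfacing exactly where and why the pipeline fails."""
    report = []
    for name, stage in stages:
        start = time.perf_counter()
        try:
            stage()
            status = "passed"
        except Exception as err:  # a real runner would also capture logs and artifacts
            status = f"failed: {err}"
        report.append((name, status, time.perf_counter() - start))
        if status != "passed":
            break  # fail fast, as a real merge/build/deploy pipeline would
    return report

def failing_build():
    raise RuntimeError("missing artifact")

# Hypothetical stages standing in for lint, test, build, and deploy steps.
stages = [
    ("lint", lambda: None),
    ("unit-tests", lambda: None),
    ("build", failing_build),
    ("deploy", lambda: None),
]
for name, status, elapsed in run_pipeline(stages):
    print(f"{name:<12} {status} ({elapsed * 1000:.2f} ms)")
```

A report like this, surfaced directly to the developer who pushed the code, shortens the feedback loop instead of routing every diagnosis through an understaffed DevOps team.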


How to Architect Software for a Greener Future

Code efficiency is something that the platforms and the languages should make easy for us. They should do the work, because that's their area of expertise, and we should just write code. Yes, of course, write efficient code, but it's not a silver bullet. What about data center efficiency, then? Surely, if we just made our data center hyper-efficient, we wouldn't have to worry. We could just leave this problem to someone else. ... It requires you to do some thinking. It also requires you to orchestrate this in some type of way. One way to do this is autoscaling. Let's talk about autoscaling. We have the same chart here but we have added demand. Autoscaling is the simple concept that when you have more demand, you use more resources and get a bigger box (a virtual machine, for example). The key here is that it's very easy to do the first thing. We like to do this, "I think demand is going to go up, provision more, have more space. Yes, I feel safe. I feel secure now". Going the other way is a little scarier. It's actually just as important from a sustainability standpoint. Otherwise, we end up in the first scenario where we are incorrectly sized for our resource use. Of course, this is a good tool to use if you have variability in demand.
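The scale-up/scale-down asymmetry described above can be captured in a tiny threshold policy: provision eagerly when utilization exceeds headroom, and release conservatively when demand drops. All numbers below are illustrative; production policies add cooldown periods and predictive signals on top of rules like these.

```python
def autoscale(current, demand, capacity_per_instance=100,
              scale_up_threshold=0.8, scale_down_threshold=0.5,
              min_instances=1):
    """Threshold autoscaler: scale up eagerly, scale down conservatively."""
    target_capacity = int(capacity_per_instance * scale_up_threshold)
    utilization = demand / (current * capacity_per_instance)
    if utilization > scale_up_threshold:
        needed = -(-demand // target_capacity)  # ceiling division
        return max(current, needed)
    if utilization < scale_down_threshold:
        # Size down to what demand needs, but never below the floor.
        needed = -(-demand // target_capacity)
        return max(min_instances, needed)
    return current

print(autoscale(current=2, demand=300))  # demand spike: grow to 4 instances
print(autoscale(current=6, demand=120))  # quiet period: shrink, keeping headroom
```

Note that both directions size to the same headroom target, which is exactly what prevents the "incorrectly sized" scenario the speaker warns about.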


Tech Trends 2025 shines a light on the automation paradox – R&D World

The surge in AI workloads has prompted enterprises to invest in powerful GPUs and next-generation chips, reinventing data centers as strategic resources. ... As organizations race to tap progressively more sophisticated AI systems, hardware decisions once again become integral to resilience, efficiency and growth, while leading to more capable “edge” deployments closer to humans and not just machines. As Tech Trends 2025 noted, “personal computers embedded with AI chips are poised to supercharge knowledge workers by providing access to offline AI models while future-proofing technology infrastructure, reducing cloud computing costs, and enhancing data privacy.” ... Data is the bedrock of effective AI, which is why “bad inputs lead to worse outputs—in other words, garbage in, garbage squared,” as Deloitte’s 2024 State of Generative AI in the Enterprise Q3 report observes. Fully 75% of surveyed organizations have stepped up data-life-cycle investments because of AI. Layer a well-designed data framework beneath AI, and you might see near-magic; rely on half-baked or biased data, and you risk chaos. As a case in point, Vancouver-based LIFT Impact Partners fine-tuned its AI assistants on focused, domain-specific data to help Canadian immigrants process paperwork—a far cry from scraping the open internet and hoping for the best.


What Happens to Relicensed Open Source Projects and Their Forks?

Several companies have relicensed their open source projects in the past few years, so the CHAOSS project decided to look at how an open source project’s organizational dynamics evolve after relicensing, both within the original project and its fork. Our research compares and contrasts data from three case studies of projects that were forked after relicensing: Elasticsearch with fork OpenSearch, Redis with fork Valkey, and Terraform with fork OpenTofu. These relicensed projects and their forks represent three scenarios that shed light on this topic in slightly different ways. ... OpenSearch was forked from Elasticsearch on April 12, 2021, under the Apache 2.0 license, by the Amazon Web Services (AWS) team so that it could continue to offer this service to its customers. OpenSearch was owned by Amazon until September 16, 2024, when it transferred the project to the Linux Foundation. ... OpenTofu was forked from Terraform on Aug. 25, 2023, by a group of users as a Linux Foundation project under the MPL 2.0. These users were starting from scratch with the codebase since no contributors to the OpenTofu repository had previously contributed to Terraform.


Setting up a Security Operations Center (SOC) for Small Businesses

In today's digital age, security is not optional for any business, irrespective of its size. Small businesses equally face increasing cyber threats, making it essential to have robust security measures in place. A SOC is a dedicated team responsible for monitoring, detecting, and responding to cybersecurity incidents in real-time. It acts as the frontline defense against cyber threats, helping to safeguard your business's data, reputation, and operations. By establishing a SOC, you can proactively address security risks and enhance your overall cybersecurity posture. The cost of setting up a SOC for a small business may be prohibitive, in which case businesses may look at engaging managed service providers for all or part of these services. ... Establishing clear, well-defined processes is vital for the smooth functioning of your SOC. The NIST Cybersecurity Framework can be a good fit for businesses of any size; define the processes that are essential and relevant considering the size, threat landscape and risk tolerance of the business. ... Continuous training and development are essential for keeping your SOC team prepared to handle evolving threats. Offer regular training sessions, certifications, and workshops to enhance their skills and knowledge.
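A first SOC detection can be as simple as counting failed logins per source IP in authentication logs. The sshd-style log format and the threshold below are illustrative; tune both to your own environment and route alerts into whatever ticketing or SIEM process you define.

```python
import re
from collections import Counter

# sshd-style failed-login line; adapt the pattern to your own log sources.
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def detect_bruteforce(log_lines, threshold=5):
    """Flag source IPs that exceed a failed-login threshold."""
    attempts = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            attempts[match.group(2)] += 1  # group(2) is the source IP
    return {ip: count for ip, count in attempts.items() if count >= threshold}

logs = [f"May 10 10:00:0{i} host sshd[1]: Failed password for root from 203.0.113.9 port 22"
        for i in range(6)]
logs.append("May 10 10:07:00 host sshd[1]: Failed password for invalid user bob from 198.51.100.4 port 22")
print(detect_bruteforce(logs))  # only the brute-forcing IP is flagged
```

Even a small SOC, or a managed provider acting as one, builds up from detections of exactly this shape: parse, count, threshold, alert.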



Quote for the day:

"Hardships often prepare ordinary people for an extraordinary destiny." -- C.S. Lewis

Daily Tech Digest - June 28, 2024

AI success: Real or hallucination?

The biggest problem may not be passing compliance muster, but financial muster. If AI is consuming hundreds of thousands of GPUs per year, requiring that those running AI data centers canvass frantically in search of the power needed to drive these GPUs and to cool them, somebody is paying to build AI, and paying a lot. Users report that the great majority of the AI tools they use are free. Let me try to grasp this; AI providers are spending big to…give stuff away? That’s an interesting business model, one I personally wish was more broadly accepted. But let’s be realistic. Vendors may be willing to pay today for AI candy, but at some point AI has to earn its place in the wallets of both supplier and user CFOs, not just in their hearts. We have AI projects that have done that, but most CIOs and CFOs aren’t hearing about them, and that’s making it harder to develop the applications that would truly make the AI business case. So the reality of AI is buried in hype? It sure sounds like AI is more hallucination than reality, but there’s a qualifier. Millions of workers are using AI, and while what they’re currently doing with it isn’t making a real business case, that’s a lot of activity.


Space: The Final Frontier for Cyberattacks

"Since failing to imagine a full range of threats can be disastrous for any security planning, we need more than the usual scenarios that are typically considered in space-cybersecurity discussions," Lin says. "Our ICARUS matrix fills that 'imagineering' gap." Lin and the other authors of the report — Keith Abney, Bruce DeBruhl, Kira Abercromby, Henry Danielson, and Ryan Jenkins — identified several factors as increasing the potential for outer space-related cyberattacks over the next several years and decades. Among them is the rapid congestion of outer space in recent years as the result of nations and private companies racing to deploy space technologies; the remoteness of space; and technological complexity. ... The remoteness — and vastness of space — also makes it more challenging for stakeholders — both government and private — to address vulnerabilities in space technologies. There are numerous objects that were deployed into space long before cybersecurity became a mainstream concern that could become targets for attacks.


The perils of overengineering generative AI systems

Overengineering any system, whether AI or cloud, happens through easy access to resources and no limitations on using those resources. It is easy to find and allocate cloud services, so it’s tempting for an AI designer or engineer to add things that may be viewed as “nice to have” more so than “need to have.” Making a bunch of these decisions leads to many more databases, middleware layers, security systems, and governance systems than needed. ... “We need to account for future growth,” but this can often be handled by adjusting the architecture as it evolves. It should never mean tossing money at the problems from the start. This tendency to include too many services also amplifies technical debt. Maintaining and upgrading complex systems becomes increasingly difficult and costly. If data is fragmented and siloed across various cloud services, it can further exacerbate these issues, making data integration and optimization a daunting task. Enterprises often find themselves trapped in a cycle where their generative AI solutions are not just overengineered but also under-optimized, leading to diminished returns on investment.


Data fabric is a design concept for integrating and managing data. Through flexible, reusable, augmented, and sometimes automated data integration, or copying of data into a desired target database, it facilitates data access for the business and its data analysts. ... Physically moving data can be tedious, involving planning, modeling, and developing ETL/ELT pipelines, along with associated costs. However, a data fabric abstracts these steps, providing capabilities to copy data to a target database. Analysts can then replicate the data with minimal planning, reduced data silos, and enhanced data accessibility and discovery. Data fabric is an abstracted semantic-based data capability that provides the flexibility to add new data sources, applications, and data services without disrupting existing infrastructure. ... As the data volume increases, the fabric adapts without compromising efficiency. Data fabric empowers organizations to leverage multiple cloud providers. It facilitates flexibility, avoids vendor lock-in, and accommodates future expansion across different cloud environments.
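The abstraction can be sketched as a thin registry of source adapters behind one query interface, so analysts need not know where data physically lives. The source names and record shapes below are hypothetical.

```python
class DataFabric:
    """Minimal sketch of fabric-style access: one query interface over
    heterogeneous sources registered as adapters."""

    def __init__(self):
        self._sources = {}

    def register(self, name, fetch_fn):
        self._sources[name] = fetch_fn  # the adapter knows how to reach the real store

    def query(self, source, **filters):
        records = self._sources[source]()
        return [r for r in records
                if all(r.get(k) == v for k, v in filters.items())]

fabric = DataFabric()
fabric.register("crm", lambda: [{"customer": "acme", "region": "eu"}])
fabric.register("warehouse", lambda: [{"customer": "acme", "revenue": 10}])

# Analysts use one interface regardless of where the data physically lives.
print(fabric.query("crm", region="eu"))
```

Adding a new source is just another `register` call, which is the "add new data sources without disrupting existing infrastructure" property in miniature.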


DFIR and its role in modern cybersecurity

In incident response, digital forensics provides detailed insights to highlight the cause and sequence of events in breaches. This data is vital for successful containment, eradication of the danger, and recovery. Conducting post-incident forensic reports can similarly enhance security by pinpointing system vulnerabilities and suggesting actions to prevent future breaches. Incorporating digital forensics into incident response essentially allows you to examine incidents thoroughly, leading to faster recovery, enhanced security measures, and increased resilience to cyber threats. This partnership improves your ability to identify, evaluate, and address cyber threats thoroughly. ... Emerging trends and technologies are shaping the future of DFIR in cybersecurity. Artificial intelligence and machine learning are increasing the speed and effectiveness of threat detection and response. Cloud computing is revolutionising processes with its scalable options for storing and analysing data. Additionally, improved coordination with other cybersecurity sectors, such as threat intelligence and network security, leads to a more cohesive defence plan.


Ensuring Application Security from Design to Operation with DevSecOps

DevSecOps is as much about cultural transformation as it is about tools and processes. Before diving into technical integrations, ensure your team’s mindset aligns with DevSecOps principles. Underestimating the cultural aspects, such as resistance to change, fear of increased workload or misunderstanding the value of security, can impede adoption. You can address these challenges by highlighting the benefits of DevSecOps, celebrating successes and promoting a culture of learning and continuous improvement. Developers should be familiar with the nuances of the security tools in use and how to interpret their outputs. ... DevSecOps is a journey, not a destination. Regularly review the effectiveness of your tool integrations and workflows. Gather feedback from all stakeholders and define metrics to measure the effectiveness of your DevSecOps practices, such as the number of vulnerabilities identified and remediated, the time taken to fix critical issues and the frequency of zero-day attacks and other security incidents. 


Essential skills for leaders in industry 4.0

Agility enables swift adaptation to new technologies and market shifts, keeping your organisation competitive and innovative. Digital leaders must capitalise on emerging opportunities and navigate disruptions such as technological advancements, shifting consumer preferences, and increased global competition. ... Effective communication is vital for digital leadership, especially when implementing organisational change. Inspiring positive, incremental change requires empowering your team to work towards common business goals and objectives. Key communication skills include clarity, precision, active listening, and transparency. ... Empathy is essential for guiding your team through digital transformation. True adoption demands conviction from top leaders and a determined spirit throughout the organisation. Success lies in integrating these concepts into the company’s operations and culture. Acknowledge that change can be overwhelming, and by addressing employees' stressors proactively, you can secure their support for strategic initiatives. ... Courage is indispensable for digital leaders, requiring the embrace of risk to ensure success. 


Platform as a Runtime - The Next Step in Platform Engineering

It is almost impossible to ensure that all developers 100% comply with all the system's non-functional requirements. Even a simple thing like input validation may vary between developers. For instance, some will not allow Nulls in a string field, while others allow Nulls, causing inconsistency in what is implemented across the entire system. Usually, the first step to aligning all developers on best practices and non-functional requirements is documentation, build and lint rules, and education. However, in a complex world, we can’t build perfect systems. When developers need to implement new functionality, they are faced with trade-offs they need to make. The need for standardization arises as a way to mitigate scaling challenges. Microservices are another attempt to handle scaling issues, but as the number of microservices grows, you will start to face the complexity of a large-scale microservices environment. In distributed systems, requests may fail due to network issues. Performance is degraded since requests flow across multiple services via network communication, as opposed to in-process method calls in a monolith.
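One way to align developers on details like null handling is a shared validation helper, so the decision is made once rather than per developer. A minimal sketch, with illustrative defaults:

```python
def validate_string(value, field, allow_null=False, max_len=255):
    """Shared validation helper so every service treats nulls and lengths
    identically, instead of each developer deciding ad hoc."""
    if value is None:
        if allow_null:
            return None
        raise ValueError(f"{field} must not be null")
    if not isinstance(value, str):
        raise TypeError(f"{field} must be a string")
    if len(value) > max_len:
        raise ValueError(f"{field} exceeds {max_len} characters")
    return value.strip()

print(validate_string("  Alice  ", "name"))                # prints: Alice
print(validate_string(None, "nickname", allow_null=True))  # prints: None
```

Baking helpers like this into a platform layer is precisely the kind of non-functional requirement a Platform as a Runtime can enforce for every service, rather than hoping each team reimplements it consistently.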


The distant world of satellite-connected IoT

The vision is that IoT devices, and mobile phones, will be designed so that as they cross out of terrestrial connectivity, they can automatically switch to satellite. Devices will no longer be either/or; they will be both, offering a much more reliable network: when a device loses contact with the terrestrial network, a permanently available alternative can be used. “Satellite is wonderful from a coverage perspective,” says Nuttall. “Anytime you see the sky, you have satellite connectivity. The challenge lies in it being a separate device, and that ecosystem has not really proliferated or grown at scale.” Getting to that point, MacLeod predicts that we will first see people using 3GPP-type standards over satellite links, but they won’t immediately be interoperating. “Things can change, but in order to make the space segment super efficient, it currently uses a data protocol that's referred to as NIDD - non-IP-based data delivery - which is optimized for trickier links,” explains MacLeod. “But NB-IoT doesn’t use it, so the current style of addressing data communication in space isn’t mirrored by that on the ground network. Of course, that will change, but none of us knows exactly how long it will take.”


Navigating the cloud: How SMBs can mitigate risks and maximise benefits

SMBs often make several common mistakes when it comes to cloud security. By recognising and addressing these blind spots, organisations can significantly enhance their cybersecurity. One major mistake is placing too much trust in the cloud provider. Many IT leaders assume that investing in cloud services means fully outsourcing security to a third party. However, security responsibilities are shared between the cloud service provider (CSP) and the customer; the specific split depends on the type of cloud service and the provider. Another common error is failing to back up data. Organisations should not assume that their cloud provider will automatically handle backups. It's essential to prepare for worst-case scenarios, such as system failures or cyberattacks, as lost data can lead to significant downtime as well as productivity and reputation losses. Neglecting regular patching also exposes cloud systems to vulnerabilities. Unpatched systems can be exploited, leading to malware infections, data breaches, and other security issues. Regular patch management is crucial for maintaining cloud security, just as it is for on-premises systems.
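The backup advice above is actionable with very little code. The following is a toy, stdlib-only sketch (the function names are assumptions, and a real SMB setup would back up to separate storage, keep multiple versions, and test restores): copy a file and refuse to trust the copy until its checksum matches the original.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file so a backup copy can be verified byte-for-byte."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_and_verify(src: Path, dest_dir: Path) -> Path:
    """Copy src into dest_dir and fail loudly if the copy does not match."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # preserves timestamps alongside contents
    if sha256(src) != sha256(dest):
        raise IOError(f"backup of {src} failed integrity check")
    return dest
```

The point of the verification step is the article's point in miniature: a backup you have never checked is an assumption, not a safeguard.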



Quote for the day:

"What seems to us as bitter trials are often blessings in disguise." -- Oscar Wilde

Daily Tech Digest - April 15, 2024

Generative AI Strategy For Enterprise

Align the guidelines with enterprise business initiatives. Identify the business challenges that require attention. Also, understand the business benefits of AI adoption that are critical for the success of the enterprise. Select targeted use cases and perform proofs of concept (POCs) that can deliver the desired business and operational outcomes. AI use cases should not be viewed in isolation. AI initiatives and technology should be integrated into existing business processes and workflows to optimize and streamline them. Build value through improved productivity, growth, and new business models. ... Prioritize GenAI use-case initiatives based on the highest potential value and feasibility to execute. Implement a model development lifecycle that includes products and services, rigorous testing, validation, and documentation. Build a roadmap that provides a plan to deliver the identified GenAI applications by prioritizing and simplifying the actions required to deliver the identified initiatives. Create processes for ongoing monitoring and auditing of GenAI systems for responsible use of AI, to ensure compliance with legal and ethical standards and to address algorithmic biases.


Do cloud-based genAI services have an enterprise future?

“Given the data gravity in the cloud, it is often the easiest place to start with training data. However, there will be a lot of use cases for smaller LLMs and AI inferencing at the edge. Also, cloud providers will continue to offer build-your-own AI platform options via Kubernetes platforms, which have been used by data scientists for years now,” Sustar said. “Some of these implementations will take place in the data center on platforms such as Red Hat OpenShift AI. Meanwhile, new GPU-oriented clouds like Coreweave will offer a third option. This is early days, but managed AI services from cloud providers will remain central to the AI ecosystem.” And while smaller LLMs are on the horizon, enterprises will still use major companies’ AI cloud services when they need access to very large LLMs, according to Litan. Even so, more organizations will eventually be using small LLMs that run on much smaller hardware, “even as small as a common laptop.” “And we will see the rise of services companies that support that configuration, along with the privacy, security and risk management services that will be required,” Litan said.


6 bad cybersecurity habits that put SMBs at risk

Cybersecurity can’t be addressed with technology alone; in many ways it’s a human problem, according to Sage. “Technology enables attacks, technology facilitates preventing attacks, technology helps with cleaning up after an attack, but that technology requires a knowledgeable human to be effective, at least for now,” they say. This feeds into other problems: a lack of budget and no dedicated responsibility for cybersecurity. “These are significant challenges for SMBs, leaving them without guidance on compliance frameworks and a clear direction, and reliant on providers for support,” says Iqbal. ... Adopting good cyber hygiene habits should be a no-brainer, although in practice it is hit and miss. For instance, allowing the use of weak passwords is all too common, according to Iqbal. He’s also found instances where the default password for logins has not been changed, or where all the passwords for security servers are changed to a single password and there isn’t a separate administrative password. “The admin account is the most lucrative account threat actors are looking to compromise. It just takes one compromise and then the keys to the kingdom are flung open to all your potential threat actors,” he says.
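The two habits Iqbal calls out, unchanged default passwords and one password shared across accounts, are both mechanically detectable. The sketch below is purely illustrative (the function and the sample default-password list are assumptions, and a real audit would work on credential hashes or an identity provider's API, never a plaintext map): it flags accounts still on known defaults and accounts sharing a single password.

```python
# Illustrative audit, not a real tool: flag accounts still using known
# default credentials, and detect one password reused across accounts.
DEFAULT_PASSWORDS = {"admin", "password", "changeme", "123456"}  # sample list

def audit_passwords(accounts: dict[str, str]) -> dict[str, list[str]]:
    findings = {"default": [], "reused": []}
    seen: dict[str, list[str]] = {}
    for user, pw in accounts.items():
        if pw.lower() in DEFAULT_PASSWORDS:
            findings["default"].append(user)
        seen.setdefault(pw, []).append(user)  # group users by password
    for users in seen.values():
        if len(users) > 1:  # same password on multiple accounts
            findings["reused"].extend(users)
    return findings
```

Even a crude check like this surfaces exactly the "keys to the kingdom" scenario: a shared admin password shows up as a reuse cluster.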


Generative AI is coming for healthcare, and not everyone’s thrilled

While generative AI shows promise in specific, narrow areas of medicine, experts like Borkowski point to the technical and compliance roadblocks that must be overcome before generative AI can be useful — and trusted — as an all-around assistive healthcare tool. “Significant privacy and security concerns surround using generative AI in healthcare,” Borkowski said. “The sensitive nature of medical data and the potential for misuse or unauthorized access pose severe risks to patient confidentiality and trust in the healthcare system. Furthermore, the regulatory and legal landscape surrounding the use of generative AI in healthcare is still evolving, with questions regarding liability, data protection and the practice of medicine by non-human entities still needing to be solved.” Even Thirunavukarasu, bullish as he is about generative AI in healthcare, says that there needs to be “rigorous science” behind tools that are patient-facing. “Particularly without direct clinician oversight, there should be pragmatic randomized control trials demonstrating clinical benefit to justify deployment of patient-facing generative AI,” he said. 


State of the CIO, 2024: Change makers in the business spotlight

The push for innovation requires a steady hand, and CIOs are stepping in to provide guidance, including orienting the greater enterprise to the potential — and the pitfalls — of new technologies like AI. Eighty-five percent of respondents to the 2024 State of the CIO survey view the CIO as a critical change maker and a much-needed resource given the pace and scale of change, amplified by the frenzy around AI. “With all the hype of AI and the velocity at which technology is evolving, my focus as a CIO continuously and relentlessly has to be through the lens of strategy, execution, and culture,” says Sanjeev Saturru, CIO at Casey’s, the third-largest convenience store chain in the United States. ... “Eighteen months ago, AI was an interesting topic, but today, if you don’t have a plan to elevate experience via AI you are behind,” says LaQuinta. “We have a maniacal focus on maximizing the contribution of advanced intelligence, supported by AI. That could be making information available at the click of a button to help advisors be more efficient with their time or to serve clients better in a hyperpersonalized way.”


Cloned Voice Tech Is Coming for Bank Accounts

At many financial institutions, your voice is your password. Tiny variations in pitch, tone and timbre make human voices unique - apparently making them an ideal method for authenticating customers phoning for service. Major banks across the globe have embraced voice print recognition. It's an ideal security measure, as long as computers can't be trained to easily synthesize those pitch, tone and timbre characteristics in real time. They can. Generative artificial intelligence bellwether OpenAI in late March announced a preview of what it dubbed Voice Engine, technology that with a 15-second audio sample can generate natural-sounding speech "that closely resembles the original speaker." While OpenAI touted the technology for the good it could do - instantaneous language translation, speech therapy, reading assistance - critics' thoughts went immediately to where it could do harm, including in breaking that once ideal authentication method for keeping fraudsters out. It also could supercharge impersonation fraud fueling "child in trouble" and romance scams as well as disinformation.


Data pipelines for the rest of us

In some ways, Airflow is like a seriously upgraded cron job scheduler. Companies start with isolated systems, which eventually need to be stitched together. Or, rather, the data needs to flow between them. As an industry, we’ve invented all sorts of ways to manage these data pipelines, but as data increases, the systems to manage that data proliferate, not to mention the ever-increasing sophistication of the interactions between these components. It’s a nightmare, as the Airbnb team wrote when open sourcing Airflow: “If you consider a fast-paced, medium-sized data team for a few years on an evolving data infrastructure and you have a massively complex network of computation jobs on your hands, this complexity can become a significant burden for the data teams to manage, or even comprehend.” Written in Python, Airflow naturally speaks the language of data. Think of it as connective tissue that gives developers a consistent way to plan, orchestrate, and understand how data flows between every system. A significant and growing swath of the Fortune 500 depends on Airflow for data pipeline orchestration, and the more they use it, the more valuable it becomes. 
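The "seriously upgraded cron" idea, tasks plus declared dependencies, executed in order, can be shown in a few lines. This is a toy sketch, not Airflow's API (Airflow adds scheduling, retries, backfills, and a UI on top of this core): the `run_pipeline` helper and the extract/transform/load task names are assumptions for illustration.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Toy pipeline runner (not Airflow's API): each task names the tasks it
# depends on, and the runner executes them in dependency order -- cron
# jobs upgraded with an explicit dependency graph.
def run_pipeline(tasks: dict, deps: dict) -> list:
    order = list(TopologicalSorter(deps).static_order())
    results = {}
    executed = []
    for name in order:
        # Feed each task the outputs of its declared upstream tasks.
        results[name] = tasks[name](*(results[d] for d in deps.get(name, ())))
        executed.append(name)
    return executed

tasks = {
    "extract": lambda: [1, 2, 3],
    "transform": lambda rows: [r * 2 for r in rows],
    "load": lambda rows: print(f"loaded {rows}"),
}
deps = {"transform": ("extract",), "load": ("transform",)}
run_pipeline(tasks, deps)  # extract, then transform, then load
```

What Airflow adds beyond this sketch, and why teams adopt it, is everything around the graph: scheduling each run, retrying failed tasks, and making the whole flow observable.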


The 5 Steps to Crafting an Impactful Enterprise Architecture Communication Strategy

To successfully convey the significance of enterprise architecture within an organization, a structured and strategic approach to communication is crucial. Here’s an overview of the five pivotal steps to create an impactful enterprise architecture communication strategy: Clarify Strategic Objectives: Define clear-cut enterprise architecture objectives that align with the broader vision of the organization. ... Contextual Understanding: Assess the current state of enterprise architecture in your organization and the specific goals you seek to achieve through this communication strategy. ... Audience Insights: Segment your internal audience to understand the varying levels of EA awareness and the distinct needs across departments. ... Selecting Suitable Communication Tools: With a plethora of digital tools available, it’s essential to choose those that best align with your enterprise architecture communication goals. ... Developing the EA Communication Plan: Integrate all insights and choices into a coherent communication plan that outlines how enterprise architecture will be communicated across the organization. 


A Call for Technology Resilience

A major inflection point in application development has been the adoption of Agile. With iterative, Agile application development, an application or system is never finished. It’s continuously changing as business conditions and circumstances change. Both users and IT accept this iterative development without endpoints. On the other hand, endpoints (and more of them) in IT projects can also foster technology resilience. They achieve resilience because a large project that gets interrupted by an immediate and overriding business necessity is more easily paused if it is structured as a series of mini projects that deliver incremental functionality. ... Your network goes down under a malware attack, but your network guru has just left the company for another opportunity. Do you have someone who can step in and do the work? Or, what if your DBA leaves? How long can you delay defining an AI data architecture, and will it harm the company competitively? To achieve IT roster depth, staff must be trained in new responsibilities, or at least cross-trained in different roles that they can assume if needed.


SaaS Tools: Major Threat Vector for Enterprise Security

When considering SaaS security risks, organizations have to take into account whether the SaaS provider is an established player or a startup, Lobo said. Established players have the resources to invest heavily in the security of their applications and are less vulnerable to code injection attacks. Organizations do not have the auditing powers to measure an established vendor's security credentials and have no recourse but to trust the vendor. But when it comes to dealing with smaller companies, organizations can scrutinize encryption and cloud security practices, evaluate supply chains, check for vulnerabilities in the application code and conduct frequent security assessments. Lobo said many organizations today rely on services such as SecurityScorecard, UpGuard and similar companies that keep track of vulnerabilities in enterprise software and alert users, giving them the opportunity to patch third-party software prior to exploitation. Shankar Ramaswamy, solutions director at Bangalore-based IT consultancy giant Wipro, said organizations using third-party SaaS applications must focus on three major aspects - strengthen endpoint security, minimize the applications' access to internal resources and replace passwords with multifactor authentication.



Quote for the day:

"The only way to do great work is to love what you do." -- Steve Jobs