
Daily Tech Digest - January 12, 2026


Quote for the day:

"The people who 'don't have time' and the people who 'always find time' have the same amount of time." -- Unknown



7 challenges IT leaders will face in 2026

IDC’s Rajan says that by the end of the decade organizations will see lawsuits, fines, and CIO dismissals due to disruptions from inadequate AI controls. As a result, CIOs say, governance has become an urgent concern — not an afterthought. ... Rishi Kaushal, CIO of digital identity and data protection services company Entrust, says he’s preparing for 2026 with a focus on cultural readiness, continuous learning, and preparing people and the tech stack for rapid AI-driven changes. “The CIO role has moved beyond managing applications and infrastructure,” Kaushal says. “It’s now about shaping the future. As AI reshapes enterprise ecosystems, accelerating adoption without alignment risks technical debt, skills gaps, and greater cyber vulnerabilities. Ultimately, the true measure of a modern CIO isn’t how quickly we deploy new applications or AI — it’s how effectively we prepare our people and businesses for what’s next.” ... When modernizing applications, Vidoni argues that teams need to stay outcome-focused, phasing in improvements that directly support their goals. “This means application modernization and cloud cost-optimization initiatives are required to stay competitive and relevant,” he says. “The challenge is to modernize and become more agile without letting costs spiral. By empowering an organization to develop applications faster and more efficiently, we can accelerate modernization efforts, respond more quickly to the pace of tech change, and maintain control over cloud expenditures.”


Rethinking OT security for project-heavy shipyards

In OT, availability always wins. If a security control interferes with operations, it will be bypassed or rejected, often for good reasons. That constraint forces a different mindset. The first mental shift is letting go of the idea that visibility requires changing the devices themselves. In many legacy environments, that simply isn’t an option. So you have to look elsewhere. In practice, meaningful visibility often starts at the network level, using passive observation rather than active interrogation. You learn what “normal” looks like by watching how systems communicate, not by poking them. ... In our environment, sustainable IT/OT integration means avoiding ad-hoc connectivity altogether. When we connect vessels, yards and on-shore systems, we do so through deliberately designed integration paths. One practical example of this approach is how we use our Triton Guard platform: secure remote access, segmentation and monitoring are treated as integral parts of the digital solution itself, not as optional add-ons introduced later. That allows us to enable innovation while retaining control as IT and OT continue to converge. ... In practice, least privilege means being disciplined about time and purpose. Access should expire by default. It should be linked to a specific task, not to a project or a person’s role in general. We have found that making access removal automatic is often more effective than adding extra approval steps at the front end. If access cannot be explained in one sentence, it probably shouldn’t exist.
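
To make the "expire by default" idea concrete, here is a minimal Python sketch of task-scoped access grants that lapse automatically instead of waiting for a manual revocation step; the class and field names are illustrative assumptions, not part of any product mentioned above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessGrant:
    user: str
    system: str
    task: str                      # the one-sentence purpose; no task, no grant
    expires_at: datetime

    def is_active(self, now: datetime | None = None) -> bool:
        return (now or datetime.now(timezone.utc)) < self.expires_at

def grant_access(user: str, system: str, task: str, hours: int = 4) -> AccessGrant:
    """Issue a task-scoped grant that expires automatically (default: 4 hours)."""
    if not task.strip():
        raise ValueError("Access must be tied to a specific task")
    return AccessGrant(user, system, task,
                       datetime.now(timezone.utc) + timedelta(hours=hours))

def sweep(grants: list[AccessGrant]) -> list[AccessGrant]:
    """Removal is automatic: expired grants simply drop off at the next sweep."""
    return [g for g in grants if g.is_active()]
```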


Mastering the architecture of hybrid edge environments

A mature IT architecture is characterized by well-orchestrated workflows that enable compute at the edge as well as data exchanges between the edge and central IT. Throughout all processes, security must be maintained. ... Conceptually, creating an IT architecture that incorporates both central IT and the edge sounds easy -- but it isn't. What must be achieved architecturally is a synergistic blend of hardware, software, applications, security and communications that work seamlessly together, whether the technology is at the edge or in the data center. When multiple solutions and vendors are involved, the integration of these elements can be daunting -- but the way that IT can address architectural conflicts upfront is by predefining the interface protocols, devices, and the hardware and software stacks. ... The hybrid approach is a win-win for everyone. It gives users a sense of autonomy, and it saves IT from making frequent trips to remote sites. The key to it all is to clearly define the roles that IT and end users will play in edge support. In other words, what are end-user technical support people in charge of, and at what point does IT step in? ... Finally, a mature architecture must define disaster recovery. What happens if a remote edge site fails? The architecture must specify where that site fails over to, so it can keep going even if its local systems are out. In these cases, data and systems must be replicated for redundancy in the cloud or in the corporate data center, so remote sites can fail over to these resources, with end-to-end security in place at all points.
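
As a rough illustration of that failover path, a gateway or client can probe an ordered list of endpoints and fall back from the local edge stack to the replicated cloud or data-center copy; the hostnames and ports in this minimal Python sketch are placeholders, not real systems.

```python
import socket

# Ordered failover targets: local edge first, then central replicas (hostnames are placeholders).
TARGETS = [
    ("edge-site.local", 8443),        # local edge stack
    ("dr.cloud.example.com", 8443),   # cloud replica
    ("dc.corp.example.com", 8443),    # corporate data center replica
]

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Basic TCP health probe; real checks would also verify TLS and application health."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_endpoint() -> tuple[str, int]:
    """Return the first healthy target, implementing the edge-to-central failover order."""
    for host, port in TARGETS:
        if reachable(host, port):
            return host, port
    raise RuntimeError("No healthy endpoint: edge and central replicas are all unreachable")
```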


The Push for Agentic AI Standards Is Well Underway

"Many existing trust frameworks were layered onto an internet never designed for machine-level delegation or accountability. As agents begin acting independently, those frameworks need to evolve rather than simply be imposed," Hazari said, who authored the book "The Internet of Agents: The Next Evolution of AI and the Future of Digital Interaction." The agentic AI standards debate ranges from adopting enforceable guardrails to ensuring interoperability. Hazari pointed out that innovation is already moving faster than formal standard-setting can go. Fragmentation is a natural phase that precedes consolidation and interoperability. ... The Agentic AI Foundation brings together early but influential agentic technologies from Amazon Web Services, Microsoft and Google. These hyperscalers are rolling out controlled AI environments often described as "AI factories" designed to deliver AI compute at enterprise scale. Initial contributions to the foundation include Anthropic's Model Context Protocol, which focuses on standardizing how agents receive and structure context; goose, an open-source agentic framework contributed by Block; and AGENTS.md from OpenAI, which defines how agents describe capabilities, permissions and constraints. Rather than prescribing a single architecture, these projects aim to standardize interfaces and metadata areas where fragmentation is already creating friction. Hazari said initiatives like the Agentic AI Foundation can absorb patterns into shared frameworks as they emerge.


7 steps to move from IT support to IT strategist

The biggest obstacle holding IT professionals back is a passive mindset. Sitting back and waiting to be told what to do prevents IT teams from reaching the strategic partnership level they want, said Eric Johnson ... Noe Ramos, vice president of AI operations at Agiloft, emphasized that strong IT leaders see their work as part of a bigger ecosystem, one that works best when people are open, share information, and collaborate. ... IT professionals need to show up as partners by truly understanding what’s going on in the business, rather than waiting for business stakeholders to come to them with problems to solve, PagerDuty’s Johnson said. “When you’re engaging with your business partners, you’re bringing proactive ideas and solutions to the table,” he said. ... Rather than having an order-taking mindset, IT professionals should ask probing questions about what partners need and what’s driving that need, which shifts toward problem-solving and focuses on outcomes rather than just implementing solutions, DeTray said. ... “IT professionals should frame every initiative in terms of the business problem it solves, the risk it reduces, or the opportunity it unlocks,” he said. ... Johnson warns against constantly searching for home runs. “Those are harder to find and they’re harder to deliver on,” he said. “Within 30 to 60 days, IT pros can build understanding around metrics and target states, then look for opportunities to help, even if they start small.”


Spec Driven Development: When Architecture Becomes Executable

The name Spec Driven Development may suggest a methodology, akin to Test Driven Development. However, this framing undersells its significance. SDD is more accurately understood as an architectural pattern, one that inverts the traditional source of truth by elevating executable specifications above code itself. SDD represents a fundamental shift in how software systems are architected, governed, and evolved. At a technical level, it introduces a declarative, contract-centric control plane that repositions the specification as the system's primary executable artifact. Implementation code, in contrast, becomes a secondary, generated representation of architectural intent. ... For decades, software architecture has operated under a largely unchallenged assumption that code is the ultimate authority. Architecture diagrams, design documents, interface contracts, and requirement specifications all existed to guide implementation. However, the running system always derived its truth from what was ultimately deployed. When mismatches occurred, the standard response was to "update the documentation." SDD inverts this relationship entirely. The specification becomes the authoritative definition of system reality, and implementations are continuously derived, validated, and, when necessary, regenerated to conform to that truth. This is not a philosophical distinction; it is a structural inversion of the governance of software systems.
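
A minimal Python sketch of that inversion, using an invented spec format rather than any real SDD tooling: the declarative spec is the authority, and the implementation is checked (and would be regenerated) against it.

```python
# Minimal sketch of spec-as-source-of-truth; the spec format and names are invented for illustration.
ORDER_SPEC = {
    "operation": "create_order",
    "input":  {"customer_id": str, "items": list},
    "output": {"order_id": str, "status": str},
    "invariants": ["status in {'accepted', 'rejected'}"],
}

def conforms(spec: dict, result: dict) -> bool:
    """Validate a concrete implementation's output against the declarative spec."""
    for field, expected_type in spec["output"].items():
        if field not in result or not isinstance(result[field], expected_type):
            return False
    return result["status"] in {"accepted", "rejected"}

# Implementation code is treated as a secondary, derived artifact: if conforms() fails,
# the implementation is corrected or regenerated to match the spec, never the other way round.
def create_order(customer_id: str, items: list) -> dict:
    return {"order_id": "ord-1", "status": "accepted"}

assert conforms(ORDER_SPEC, create_order("c-42", ["sku-1"]))
```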


Decoupling architectures: building resilience against cyber attacks

The recent incidents are tied together by a common approach to digital infrastructure: tightly coupled architectures. In these environments, critical applications such as ERP, warehouse, logistics, retail, and finance systems are interconnected so closely that if one fails, other critical systems are unable to function. A single weak point becomes the domino that topples the rest. This design may have made sense in a simpler, more predictable IT world. But in today’s highly interconnected landscape, with constantly evolving threats accelerated by the AI revolution, this once-efficient design has turned into the perfect setup for system-wide issues. ... Instead of linking systems directly, a decoupled architecture provides a shared backbone where each system publishes what happens. That means if one system is compromised or taken offline during an incident, the others can continue to function. Business operations don’t have to come to a standstill simply because a single component is isolated — and when the affected system is restored, it can replay the missed events and rejoin the flow seamlessly. Some architectures, like event-driven data streaming, can keep that data flowing in real time despite an attack. ... For CIOs and CISOs, this shift in mindset is critical. Cyber resilience is no longer just about perimeter defense or detection tools. It’s about designing systems that can limit the blast radius when hit, absorbing and isolating the damage to ensure a quick recovery.
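
A minimal in-memory Python sketch of that publish-and-replay pattern (a stand-in for a real event log such as Kafka; class and method names are illustrative): each consumer tracks its own offset, so an isolated system simply catches up when it rejoins.

```python
# Minimal sketch of a decoupled, event-driven backbone with replay.
class EventLog:
    def __init__(self):
        self.events: list[dict] = []            # append-only shared backbone

    def publish(self, event: dict) -> int:
        self.events.append(event)
        return len(self.events) - 1             # offset of the new event

    def read_from(self, offset: int) -> list[dict]:
        return self.events[offset:]

class Consumer:
    """Each downstream system tracks its own offset, so an outage just pauses it."""
    def __init__(self, name: str):
        self.name, self.offset = name, 0

    def catch_up(self, log: EventLog) -> list[dict]:
        missed = log.read_from(self.offset)     # replay everything missed while offline
        self.offset = len(log.events)
        return missed

log = EventLog()
warehouse = Consumer("warehouse")
log.publish({"type": "order_created", "id": 1})
log.publish({"type": "order_created", "id": 2})   # published while warehouse was isolated
print(warehouse.catch_up(log))                    # warehouse rejoins and replays both events
```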


AI, geopolitics & supply chains reshape cyber risk

Organisations are scaling AI in core operations, customer engagement and decision-making. This expansion is exposing new attack surfaces, including data inputs, model training pipelines and integration points with legacy systems. It also coincides with uncertain regulatory expectations on issues such as transparency, auditability and the handling of personal and sensitive data in machine learning models. ... Map these challenges onto the geopolitical fragmentation the WEF report highlights, and cyber risk is being stretched in ways many traditional compliance frameworks were not designed for, through issues such as sovereignty, supply-chain and third-party exposure. In this environment, resilience absolutely depends on an organisation's ability to integrate cyber security, information security, privacy, and AI governance into a single risk picture, and to connect that with their technology decisions, regulatory obligations, business impact, and geopolitical context. ... Hardware, software and cloud services now rely on dispersed design, manufacturing and operational ecosystems. Attackers exploit this complexity. They target upstream providers, third-party tools and managed services. ... Regulatory fragmentation around AI is emerging alongside an increase in reported misuse. This includes deepfakes, automated disinformation, fraud, model theft and prompt injection attacks, as well as concerns over opaque automated decision-making.


Five key priorities for CEOs & Governance practitioners in 2026

As the banking and fintech industries embrace cutting-edge technologies, the financial services sector will struggle without a skilled workforce to implement these technological solutions. According to IDC, the IT skills shortage is expected to impact 9 out of 10 organizations by 2026, at a cost of $5.5 trillion in delays, issues, and revenue loss. Thus, CEOs and governance professionals should take up skills management as their top priority ... AI’s explainability and transparency must be addressed as a priority. Finally, AI is creating significant environmental impacts, contributing to greenhouse gas emissions through its high energy and water consumption, which raises environmental, social and governance (ESG) issues that governance professionals must focus on. ... CEOs and governance professionals must take measures towards preemptive cybersecurity. They should realise that cybersecurity provides the foundation of trust for all the stakeholders of any enterprise and that they cannot afford to compromise on it. ... Traditional strategic planning involved fixed, long-term goals, detailed forecasts, and periodic reviews. This is not suitable in the face of constant disruption. Agile strategic planning, by contrast, relies on short planning cycles, incremental objectives, and adaptive learning. ... The future of information systems management lies in the seamless integration of cloud and edge computing – a distributed, intelligent architecture where data is processed wherever it is most efficient to do so.


Dark Web Intelligence: How to Leverage OSINT for Proactive Threat Mitigation

Experts say monitoring the dark web acts as an early warning system. Threat actors trade stolen data or exploits before they are detected in the broader world. Security pros even call dark web monitoring an ‘early warning radar’ that flags when sensitive data is leaked in underground forums. The difference is huge: without these signals, breaches go undetected for months. In fact, one report found that the average breach goes undiscovered for about 194 days without proactive measures. ... Gathering intel from the dark web requires specialized tools and techniques. Analysts use a combination of OSINT tools and commercial intelligence platforms. Basic breach-checkers (public data-leak search engines) will flag obvious exposures, but comprehensive coverage requires purpose-built scanners that constantly crawl underground forums and encrypted chat networks. ... Organizations of all sizes have seen real benefits from dark web monitoring. For example, in 2020, Marriott International identified a potential supply-chain breach when threat researchers discovered guest data being sold on underground forums. That early heads-up allowed Marriott to investigate and inform affected customers before the incident became public. Similarly, after 700 million LinkedIn profiles were scraped in 2021, the first samples of the stolen data started appearing on dark web marketplaces and were caught by monitoring tools. Those alerts prompted LinkedIn users to reset their passwords and enabled the company to shore up its credential-abuse defenses.
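
As a simplified illustration of turning raw dark-web findings into action, the Python sketch below scans a leaked-credentials file for corporate addresses; the file layout, domain list and follow-up actions are assumptions for the example, and commercial monitoring platforms automate the collection side.

```python
import csv
import hashlib

# Domains we care about (illustrative placeholders).
CORPORATE_DOMAINS = {"example.com", "corp.example.com"}

def exposed_accounts(dump_path: str) -> list[dict]:
    """Flag corporate e-mail addresses found in a leaked-credentials CSV (email,password)."""
    hits = []
    with open(dump_path, newline="", encoding="utf-8", errors="ignore") as f:
        for row in csv.DictReader(f, fieldnames=["email", "password"]):
            email = (row["email"] or "").strip().lower()
            domain = email.rsplit("@", 1)[-1]
            if domain in CORPORATE_DOMAINS:
                # Store only a hash of the exposed password, never the plaintext.
                pwd_hash = hashlib.sha256((row["password"] or "").encode()).hexdigest()
                hits.append({"email": email, "password_sha256": pwd_hash})
    return hits

# Each hit would feed a forced password reset and an MFA check for the affected account.
```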

Daily Tech Digest - August 13, 2025


Quote for the day:

“You don’t lead by pointing and telling people some place to go. You lead by going to that place and making a case.” -- Ken Kesey


9 things CISOs need to know about the dark web

There’s a growing emphasis on scalability and professionalization, with aggressive promotion and recruitment for ransomware-as-a-service (RaaS) operations. This includes lucrative affiliate programs to attract technically skilled partners and tiered access enabling affiliates to pay for premium tools, zero-day exploits or access to pre-compromised networks. The dark web is fragmenting into specialized communities, including credential marketplaces, exploit exchanges for zero-days, malware kits and compromised-system access, and forums for fraud tools. Initial access brokers (IABs) are thriving, selling entry points into corporate environments, which are then monetized by ransomware affiliates or data extortion groups. Ransomware leak sites showcase attackers’ successes, publishing sample files, threats of full data dumps as well as names and stolen data of victim organizations that refuse to pay. ... While DDoS-for-hire services have existed for years, their scale and popularity are growing. “Many offer free trial tiers, with some offering full-scale attacks with no daily limits, dozens of attack types, and even significant 1 Tbps-level output for a few thousand dollars,” says Richard Hummel, cybersecurity researcher and threat intelligence director at Netscout. The operations are becoming more professional, and many platforms mimic legitimate e-commerce sites, displaying user reviews, seller ratings, and dispute resolution systems to build trust among illicit actors.


CMMC Compliance: Far More Than Just an IT Issue

For many years, companies working with the US Department of Defense (DoD) treated regulatory mandates including the Cybersecurity Maturity Model Certification (CMMC) as a matter best left to the IT department. The prevailing belief was that installing the right software and patching vulnerabilities would suffice. Yet, reality tells a different story. Increasingly, audits and assessments reveal that when compliance is seen narrowly as an IT responsibility, significant gaps emerge. In today’s business environment, managing controlled unclassified information (CUI) and federal contract information (FCI) is a shared responsibility across various departments – from human resources and manufacturing to legal and finance. ... For CMMC compliance, there needs to be continuous assurance that involves regularly monitoring systems, testing controls and adapting security protocols whenever necessary. ... Businesses are having to rethink much of their approach to security because of CMMC requirements. Rather than treating it as something to be handed off to the IT department, organizations must now commit to a comprehensive, company-wide strategy. Integrating thorough physical security, ongoing training, updated internal policies and steps for continuous assurance means companies can build a resilient framework that meets today’s regulatory demands and prepares them to rise to challenges on the horizon.


Beyond Burnout: Three Ways to Reduce Frustration in the SOC

For years, we’ve heard how cybersecurity leaders need to get “business smart” and better understand business operations. That is mostly happening, but it’s backwards. What we need is for business leaders to learn cybersecurity, and even further, recognize it as essential to their survival. Security cannot be viewed as some cost center tucked away in a corner; it’s the backbone of your entire operation. It’s also part of an organization’s cyber insurance – the internal insurance. Simply put, cybersecurity is the business, and you absolutely cannot sell without it. ... SOCs face a deluge of alerts, threats, and data that no human team can feasibly process without burning out. While many security professionals remain wary of artificial intelligence, thoughtfully embracing AI offers a path toward sustainable security operations. This isn’t about replacing analysts with technology. It’s about empowering them to do the job they actually signed up for. AI can dramatically reduce toil by automating repetitive tasks, provide rapid insights from vast amounts of data, and help educate junior staff. Instead of spending hours manually reviewing documents, analysts can leverage AI to extract key insights in minutes, allowing them to apply their expertise where it matters most. This shift from mundane processing to meaningful analysis can dramatically improve job satisfaction.


7 legal considerations for mitigating risk in AI implementation

AI systems often rely on large volumes of data, including sensitive personal, financial and business information. Compliance with data privacy laws is critical, as regulations such as the European Union’s General Data Protection Regulation, the California Consumer Privacy Act and other emerging state laws impose strict requirements on the collection, processing, storage and sharing of personal data. ... AI systems can inadvertently perpetuate or amplify biases present in training data, leading to unfair or discriminatory outcomes. This risk is present in any sector, from hiring and promotions to customer engagement and product recommendations. ... The legal framework surrounding AI is evolving rapidly. In the U.S., multiple federal agencies, including the Federal Trade Commission and Equal Employment Opportunity Commission, have signaled they will apply existing laws to AI use cases. AI-specific state laws, including in California and Utah, have taken effect in the last year. ... AI projects involve unique intellectual property questions related to data ownership and IP rights in AI-generated works. ... AI systems can introduce new cybersecurity vulnerabilities, including risks related to data integrity, model manipulation and adversarial attacks. Organizations must prioritize cybersecurity to protect AI assets and maintain trust.


Forrester’s Keys To Taming ‘Jekyll and Hyde’ Disruptive Tech

“Disruptive technologies are a double-edged sword for environmental sustainability, offering both crucial enablers and significant challenges,” explained the 15-page report written by Abhijit Sunil, Paul Miller, Craig Le Clair, Renee Taylor-Huot, Michele Pelino, with Amy DeMartine, Danielle Chittem, and Peter Harrison. “On the positive side,” it continued, “technology innovations accelerate energy and resource efficiency, aid in climate adaptation and risk mitigation, monitor crucial sustainability metrics, and even help in environmental conservation.” “However,” it added, “the necessary compute power, volume of waste, types of materials needed, and scale of implementing these technologies can offset their benefits.” ... “To meet sustainability goals with automation and AI,” he told TechNewsWorld, “one of our recommendations is to develop proofs of concept for ‘stewardship agents’ and explore emerging robotics focused on sustainability.” When planning AI operations, Franklin Manchester, a principal global industry advisor at SAS, an analytics and artificial intelligence software company in Cary, N.C., cautioned, “Not every nut needs to be cracked with a sledgehammer.” “Start with good processes — think lean process mapping, for example — and deploy AI where it makes sense to do so,” he told TechNewsWorld.


5 Key Benefits of Data Governance

Data governance processes establish data ethics, a code of behavior providing a trustworthy business climate and compliance with regulatory requirements. The IAPP calculates that 79% of the world’s population is now protected under privacy regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). This statistic highlights the importance of governance frameworks for risk management and customer trust. ... Data governance frameworks recognize data governance roles and responsibilities and streamline processes so that corporate-wide communications can improve. This systematic approach sets up businesses to be more agile, increasing the “freedom to innovate, invest, or hunker down and focus internally,” says O’Neal. For example, Freddie Mac developed a solid data strategy that streamlined data governance communications and later had the level of buy-in for the next iteration. ... With a complete picture of business activities, challenges, and opportunities, data governance creates the flexibility to respond quickly to changing needs. This allows for better self-service business intelligence, where business users can gather multi-structured data from various sources and convert it into actionable intelligence.


Architecture Lessons from Two Digital Transformations

The prevailing mindset was that of “Don’t touch what isn’t broken”. This approach, though seemingly practical, reflected a deeper inertia, rooted in a cash-strapped culture and leadership priorities that often leaned towards prestige over progress. Over the years, the organization had acquired others in an attempt to grow its customer base. These mergers and acquisitions led to the inheritance of a great deal more legacy estate. The mess burgeoned to the extent that they needed a transformation, not now, but yesterday! That is exactly where the Enterprise Architecture practice comes into the picture. Strategically, a greenfield approach was suggested: a brand-new system from scratch, with modern data centers for the infrastructure, cloud platforms for the applications, plug-and-play architecture (or composable architecture, as it is better known) for technology, unified yet diversified multi-branding under one umbrella, and the whole works. Where things slowly started taking a downhill turn is when they decided to “outsource” the entire development of this new and shiny platform to a vendor. The reasoning was that the organization did not want to diversify from being a banking institution and turn into an IT-heavy organization. They sought experienced engineering teams who could hit the ground running and deliver in two years flat.


Cloud security in multi-tenant environments

The most useful security strategy in a multi-tenant cloud environment comes from cultivating a security-first culture. It is important to educate the team on the intricacies of the cloud security system and to implement stringent password and authentication policies, thereby promoting secure development practices. Security teams and company executives can reduce the potential effects of breaches and remain ready for changing threats with the support of event simulations, tabletop exercises, and regular training. ... As we navigate the evolving landscape of enterprise cloud computing, multi-tenant environments will undoubtedly remain a cornerstone of modern IT infrastructure. However, the path forward demands more than just technological adaptation – it requires a fundamental shift in how we approach security in shared spaces. Organizations must embrace a comprehensive defense-in-depth strategy that transcends traditional boundaries, encompassing everything from robust infrastructure hardening to sophisticated application security and meticulous user governance. The future of cloud computing need not present a binary choice between efficiency and security. ... By placing security at the heart of multi-tenant operations, organizations can fully harness the transformative power of cloud technology while protecting their most critical assets.


This Big Data Lesson Applies to AI

Bill Schmarzo was one of the most vocal supporters of the idea that there were no silver bullets, and that successful business transformation was the result of careful planning and a lot of hard work. A decade ago, the “Dean of Big Data” let this publication in on the secret recipe he would use to guide his clients. He called it the SAM test, and it allowed business leaders to gauge the viability of new IT projects through three lenses. First, is the new project strategic? That is, will it make a big difference for the company? If it won’t, why are you investing lots of money? Second, is the proposed project actionable? You might be able to get some insight with the new tech, but can your business actually do anything with it? Third, is the project material? The new project might technically be feasible, but if the costs outweigh the benefits, then it’s a failure. Schmarzo, who is currently working as Dell’s Customer AI and Data Innovation Strategist, was also a big proponent of the importance of data governance and data management. The same data governance and data management bugaboos that doomed so many big data projects are, not surprisingly, raising their ugly little heads in the age of AI. Which brings us to the current AI hype wave. We’re told that trillions of dollars are on the line with large language models, that we’re on the cusp of a technological transformation the likes of which we have never seen.
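
The SAM test is easy to express as a simple go/no-go gate; here is a minimal Python sketch, with the argument names and example figures invented purely for illustration.

```python
# A minimal sketch of applying the SAM test as a gate (thresholds and names are illustrative).
def sam_test(strategic: bool, actionable: bool, benefits: float, costs: float) -> bool:
    """A project passes only if it is strategic, actionable, and material (benefits > costs)."""
    material = benefits > costs
    return strategic and actionable and material

# Example: technically feasible, but the costs outweigh the benefits, so it fails.
print(sam_test(strategic=True, actionable=True, benefits=1.2e6, costs=2.0e6))  # False
```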


Sovereign cloud and digital public infrastructure: Building India’s AI backbone

India’s Digital Public Infrastructure (DPI) is an open, interoperable platform that powers essential services like identity and payments. It comprises foundational systems that are accessible, secure, and support seamless integration. In practice, this has taken shape as the famous “India Stack.” ... India’s digital economy is on an exciting trajectory. A large slice of that will be AI-driven services like smart agriculture, precision health, financial inclusion, and more. But to fully capitalize on this opportunity, we need both rich data and trusted compute. DPI provides vast amounts of structured data (financial records, IDs, health info) and access channels. Combining that with a sovereign cloud means we can turn data into insight on Indian soil. Indian regulators now view data itself as a strategic asset and fuel for AI. AI pilots (e.g., local-language advisory bots) are already being built on top of DPI platforms (UPI, ONDC, etc.) to deliver inclusive services. And the government has even subsidized thousands of GPUs for researchers. But all this computing and data must be hosted securely. If our AI models and sensitive datasets live on foreign soil, we remain vulnerable to geopolitical shifts and export controls. ... Now, policy is catching up with sovereignty. In 2023, the new Digital Personal Data Protection (DPDP) Act formally mandated local storage for sensitive personal data. 

Daily Tech Digest - July 27, 2023

'FraudGPT' Malicious Chatbot Now for Sale on Dark Web

Both WormGPT and FraudGPT can help attackers use AI to their advantage when crafting phishing campaigns, generating messages aimed at pressuring victims into falling for business email compromise (BEC), and other email-based scams, for starters. FraudGPT also can help threat actors do a slew of other bad things, such as: writing malicious code; creating undetectable malware; finding non-VBV bins; creating phishing pages; building hacking tools; finding hacking groups, sites, and markets; writing scam pages and letters; finding leaks and vulnerabilities; and learning to code or hack. Even so, it does appear that helping attackers create convincing phishing campaigns is still one of the main use cases for a tool like FraudGPT, according to Netenrich. ... As phishing remains one of the primary ways that cyberattackers gain initial entry onto an enterprise system to conduct further malicious activity, it's essential to implement conventional security protections against it. These defenses can still detect AI-enabled phishing, and, more importantly, subsequent actions by the threat actor.


Key factors for effective security automation

A few factors generally drive the willingness to automate security. One factor is if the risk of not automating exceeds the risk of an automation going wrong: If you conduct business in a high-risk environment, the potential for damage when not automating can be higher than the risk of triggering an automated response based on a false positive. Financial fraud is a good example, where banks routinely and automatically block transactions they find to be suspicious, because a manual process would be too slow. Another factor is when the damage potential of an automation going wrong is low. For example, there is no potential damage when trying to fetch a non-existent file from a remote system for forensic analysis. But what really matters most is how reliable automation is. For example, many threat actors today use living-off-the-land techniques, such as using common and benign system utilities like PowerShell. From a detection perspective, there are no uniquely identifiable characteristics like a file hash, or a malicious binary to inspect in a sandbox.
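
That risk calculus can be sketched directly; in this minimal Python example (the weights, thresholds and scenarios are illustrative assumptions), a response is automated only when the expected damage of not acting outweighs the expected damage of acting on a false positive.

```python
# Minimal sketch of the automation risk calculus described above.
def should_automate(risk_if_not_automated: float,
                    damage_if_wrong: float,
                    detection_confidence: float) -> bool:
    """Automate only when inaction is riskier than a possible wrong automated action."""
    expected_damage_of_acting = damage_if_wrong * (1.0 - detection_confidence)
    return risk_if_not_automated > expected_damage_of_acting

# Blocking a suspicious transaction: high risk of inaction, modest cost of a false positive.
print(should_automate(risk_if_not_automated=9.0, damage_if_wrong=2.0,
                      detection_confidence=0.7))   # True
# Isolating a production system on a weak signal: the cost of being wrong dominates.
print(should_automate(risk_if_not_automated=3.0, damage_if_wrong=50.0,
                      detection_confidence=0.6))   # False
```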


API-First Development: Architecting Applications with Intention

More traditionally, tech companies often started with a particular user experience in mind when setting out to develop a product. The API was then developed in a more or less reactive way to transfer all the necessary data required to power that experience. While this approach gets you out the door fast, it isn’t very long before you probably need to go back inside and rethink things. Without an API-first approach, you feel like you’re moving really fast, but it’s possible that you’re just running from the front door to your driveway and back again without even starting the car. API-first development flips this paradigm by treating the API as the foundation for the entire software system. Let’s face it, you are probably going to want to power more than one developer, maybe even several different teams, all possibly even working on multiple applications, and maybe there will even be an unknown number of third-party developers. Under these fast-paced and highly distributed conditions, your API cannot be an afterthought.
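
One way to read "API as the foundation" is that the contract exists as shared, typed definitions before any UI or backend is built; here is a minimal Python sketch with invented names, so consumer teams can build against it (or mock it) from day one.

```python
from dataclasses import dataclass

# Minimal sketch of contract-first design: the request/response shapes are agreed and
# published before any implementation exists (names here are invented for illustration).
@dataclass(frozen=True)
class ProfileRequest:
    user_id: str

@dataclass(frozen=True)
class ProfileResponse:
    user_id: str
    display_name: str
    plan: str   # "free" | "pro"

def get_profile(req: ProfileRequest) -> ProfileResponse:
    """Any implementation, including a stub for consumer teams, must honor this signature."""
    return ProfileResponse(user_id=req.user_id, display_name="Ada", plan="pro")

# Web, mobile and third-party teams can all code against the contract immediately.
print(get_profile(ProfileRequest(user_id="u-42")))
```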


What We Can Learn from Australia’s 2023-2030 Cybersecurity Strategy

One of the challenges facing enterprises in Australia today is a lack of clarity in terms of cybersecurity obligations, both from an operational perspective and as organizational directors. Though there are a range of implicit cybersecurity obligations designated to Australian enterprises and nongovernment entities, it is the need of the hour to have more explicitly stated obligations to increase national cyberresilience. There are also opportunities to simplify and streamline existing regulatory frameworks to ensure easy adoption of those frameworks and cybersecurity obligations. ... Another important aspect of the upcoming Australian Cybersecurity Strategy is to strengthen international cyberleaders to enable them to seize opportunities and address challenges presented by the shifting cyberenvironment. To keep up with new and emerging technologies, this cybersecurity strategy aims to take tangible steps to shape global thinking about cybersecurity.


Is your data center ready for generative AI?

Generative AI applications create significant demand for computing power in two phases: training the large language models (LLMs) that form the core of generative AI systems, and then operating the application with these trained LLMs, says Raul Martynek, CEO of data center operator DataBank. “Training the LLMs requires dense computing in the form of neural networks, where billions of language or image examples are fed into a system of neural networks and repeatedly refined until the system ‘recognizes’ them as well as a human being would,” Martynek says. Neural networks require tremendously dense high-performance computing (HPC) clusters of GPU processors running continuously for months, or even years at a time, Martynek says. “They are more efficiently run on dedicated infrastructure that can be located close to the proprietary data sets used for training,” he says. The second phase is the “inference process,” or the use of these applications to actually make inquiries and return data results.


Siloed data: the mountain of lost potential

Given AI’s growing capabilities for handling customer service are only made possible through data, the risk of not breaking down internal data siloes is sizeable, not just in terms of missing opportunities. Companies could also see a decline in the speed and quality of their customer service as contact centre agents need to spend longer navigating multiple platforms and dashboards to find the information needed to help answer customers’ queries. Eliminating data siloes requires educating everyone in the business to understand the necessity of sharing data through an open culture and encouraging the data sides of operations to co-ordinate efforts, align visions and achieve goals. The synchronisation of business operations with customer experience, alongside adopting a data-driven approach, can produce significant benefits such as increased customer spending. ... Data, working for and with AI, must be placed at the centre of the business model. This means getting board buy-in to establish a data factory run by qualified data engineers and analysts who are capable of driving the collection and use of data within the organisation.


An Overview of Data Governance Frameworks

Data governance frameworks are built on four key pillars that ensure the effective management and use of data across an organization. These pillars ensure data is accurate, can be effectively combined from different sources, is protected and used in compliance with laws and regulations, and is stored and managed in a way that meets the needs of the organization. ... Furthermore, a lack of governance can lead to confusion and duplication of effort, as different departments or individual users try to manage data with their own methods. A well-designed data governance framework ensures all users understand the rules for managing data and that there is a clear process for making changes or additions to the data. It unifies teams, improving communication between different teams and allowing different departments to share best practices. In addition, a data governance framework ensures compliance with laws and regulations. From HIPAA to GDPR, there are a multitude of data privacy laws and regulations all over the world. Running afoul of these legal provisions is expensive in terms of fines and settlement costs and can damage an organization’s reputation.


Governance — the unsung hero of ESG

What's interesting is that for the most part, they're all at different stages of transformation and managing the risks of transformation. A board has four responsibilities: observing performance; approving and providing resources to fund the strategy; hiring and developing the succession plan; and risk management. Depending on where you are in a normal cycle of a business or the market, the board is involved in these four. Also, I take lessons that I've learned at other boards and apply them possibly to Baker Hughes' situation and vice versa: take some of the lessons that I'm learning and the things that I'm hearing in the Baker Hughes situation — unattributed, of course — and bring it into other boards. Sometimes there's a nice element of sharing. As you know, Baker Hughes has a very strong Board and I am a good student at taking down good and thoughtful questions from board members and bringing that to other company boards, if appropriate.


Why whistleblowers in cybersecurity are important and need support

“Governments should have a whistleblower program with clear instructions on how to disclose information, then offer the resources to enable procedures to encourage employees to come forward and guarantee a safe reporting environment,” she says. Secondly, nations need to upgrade their legislation to include strong anti-retaliation protection against tech workers, making it unlawful for various entities to engage in reprisal. This includes job-related pressure, harassment, doxing, blacklisting, and retaliatory investigations. ... To further increase chances, employees can be offered regular training sessions in which they are informed about the importance of coming forward on cybersecurity issues, the ways to report wrongdoing, and the protection mechanisms they could access. Moreover, leadership should explain that it has zero tolerance for retaliation. “Swift action should be taken if any instances of retaliation come to light,” according to Empower Oversight. The message leadership should convey is that issues are taken seriously and that C-level executives are open for conversation if the situation requires such an action.


Cloud Optimization: Practical Steps to Lower Your Bills

Optimization is always an iterative process, requiring continual adjustment as time goes on. However, there are many quick wins and strategies that you can implement today to refine your cloud footprint: Unused virtual machines (VMs), storage and bandwidth can lead to unnecessary expenses. Conducting periodic evaluations of your cloud usage and identifying such underutilized resources can effectively minimize costs. Check your cloud console now. You might just find a couple of VMs sitting there idle, accidentally left behind after the work was done. Temporary backup resources, such as VMs and storage, are frequently used for storing data and application backups. Automate the deletion process of these temporary backup resources to save money. Selecting the appropriate tier entails choosing the cloud resource that aligns best with your requirements. For instance, if you anticipate a high volume of traffic and demand, opting for a high-end VM would be suitable. Conversely, for smaller projects, a lower-end VM might suffice.
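
A minimal Python sketch of that periodic review, run against an exported inventory rather than a live cloud API; the resource names, fields and 30-day threshold are assumptions for the example.

```python
from datetime import datetime, timedelta, timezone

# Illustrative exported inventory (fields and names are made up for the example).
inventory = [
    {"name": "vm-build-old", "type": "vm", "last_used": "2025-10-01T00:00:00+00:00"},
    {"name": "tmp-backup-q3", "type": "backup", "last_used": "2025-07-15T00:00:00+00:00"},
    {"name": "vm-web-prod", "type": "vm", "last_used": "2026-01-10T00:00:00+00:00"},
]

def stale(resource: dict, max_idle_days: int = 30) -> bool:
    """A resource is stale if it has not been used within the idle window."""
    last_used = datetime.fromisoformat(resource["last_used"])
    return datetime.now(timezone.utc) - last_used > timedelta(days=max_idle_days)

# Flag idle VMs for review and queue temporary backups for automated deletion.
for r in inventory:
    if stale(r):
        action = "delete automatically" if r["type"] == "backup" else "flag for review"
        print(f"{r['name']}: idle > 30 days -> {action}")
```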



Quote for the day:

"Your job gives you authority. Your behavior gives you respect." -- Irwin Federman

Daily Tech Digest - January 14, 2023

How to build the most impactful engineering team without adding more people

Teams celebrate a 10% improvement in efficiency when they should be looking for a 10x improvement in efficiency. Identify key moments in your product lifecycle when it makes sense to step back and identify the substantial changes that can supercharge productivity. My company builds connectors into a huge variety of data sources. At one time, we were writing 5,000 lines of code to create a single connector, which was not sustainable. Now, a single engineer can build a connector in a week with 100 lines of code. We achieved this by designing a new development framework that allows us to exploit commonalities across the connectors we build and by greatly reducing dependencies among engineers. As soon as one engineer needs input from six other engineers to complete a task, productivity takes a massive hit. Here's a thought experiment you can run to help find your own 10x improvement: Imagine your workload scales 10x overnight, and you absolutely must meet this increase without hiring more engineers or working additional hours. How do you do it? An out-of-the-box thought exercise like this can help you radically improve your approach.


Your project is unique, so why make it replicable?

While replicability isn’t as important as delivery in a modern environment, where software is often unique to the organisation, it is important to be able to prove effectiveness. At Catapult, we use an upskilling system that we call the Lighthouse Model, whereby we identify a team from the ground up that can act as a model for the rest of the business and focus first on developing them as a group. By demonstrating the effectiveness of agile as a foundation on which to build software, a Lighthouse team creates a fertile environment, which removes blocks and gathers data to help develop buy-in across the board. All this works. In 2018, the Standish Group established that ‘Agile projects’ are twice as likely to succeed as waterfall projects. In the same study the company notes that 28 per cent of waterfall projects fail, while only 11 per cent of agile projects meet the same fate. In this context, the metrics of success went beyond whether the project was on time and on budget and considered its outcomes and impact. They looked beyond the delivery against the plan to include the value delivered and customer satisfaction. In essence, they looked for the real meaning of success.


A New Definition of Reliability

The first thing you might assume is that reliability is synonymous with availability. After all, if a service is up 99% of the time, that means a user can rely on it 99% of the time, right? Obviously, this isn’t the whole story, but it’s worth exploring why. For starters, these simple system health metrics aren’t really so “simple.” Starting with just the Four Golden Signals, you’ll end up with the latency, resource saturation, error rate, and uptime of all your different services. For a complex product, this adds up to a whole lot of numbers. How do you combine and weigh all these metrics? Which are the important ones to watch and prioritize? Judging things like errors and availability can be difficult too. Gray failure, or when a service isn’t working completely but hasn’t totally failed either, can be hard to capture with quantitative metrics. When do you decide when a service is “available enough?” What about a situation where your service performs exactly as intended, but doesn’t align with your customers’ expectations? How do you capture these in your picture of system health? Clearly, there needs to be another layer to this definition of reliability!
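
To see why combining these metrics is the hard part, here is a deliberately naive Python sketch; the weights are arbitrary placeholders, which is precisely the judgment call the article is questioning.

```python
# Deliberately naive: collapse the Four Golden Signals into one number with arbitrary weights.
WEIGHTS = {"latency_ok": 0.3, "saturation_ok": 0.2, "error_rate_ok": 0.3, "uptime_ok": 0.2}

def naive_health_score(signals: dict[str, bool]) -> float:
    """Combine per-signal pass/fail checks into a single 0..1 'reliability' number."""
    return sum(WEIGHTS[k] for k, ok in signals.items() if ok)

score = naive_health_score(
    {"latency_ok": True, "saturation_ok": True, "error_rate_ok": False, "uptime_ok": True}
)
print(score)  # 0.7 -- but is a service failing many requests really "70% reliable"?
```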


Architecture Pitfalls: Don’t use your ORM entities for everything — embrace the SQL!

I suspect one of the greatest lies ever told in web application development is that if you use an ORM you can avoid writing and understanding SQL, “it’s just an implementation detail”. That might be true at first, but once you go beyond the basics that falls away quickly. ... It’s much better to let the database do this kind of filtering. After all, it’s what all of the clever folk who work on databases spend a lot of time and effort optimising. For most ORMs you have the option of writing analogues to SQL which can get you quite a long way. For example, JPA has JPQL and Hibernate has HQL. These let you build abstracted queries that should work on all databases that your ORM supports. The implication of this is that your team needs to embrace SQL and understand how to use it, rather than avoiding it by using application code instead. To dispel a common source of anxiety on this: you don’t need to be a SQL guru to get started and become familiar with what you will need for the vast majority of your implementation requirements. There are also excellent resources and books available, I will link some below. 
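
A minimal, self-contained Python sketch of the point, using the standard-library sqlite3 driver and an invented table: the same filter expressed in application code versus pushed down into SQL.

```python
import sqlite3

# In-memory database with an illustrative orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, total REAL)")
conn.executemany("INSERT INTO orders (status, total) VALUES (?, ?)",
                 [("open", 120.0), ("closed", 80.0), ("open", 45.5)])

# Anti-pattern: load every entity into the application, then filter in code.
all_rows = conn.execute("SELECT id, status, total FROM orders").fetchall()
open_orders_app_side = [r for r in all_rows if r[1] == "open" and r[2] > 100]

# Better: let the database do the filtering -- same result, far less data moved and scanned.
open_orders_db_side = conn.execute(
    "SELECT id, status, total FROM orders WHERE status = ? AND total > ?", ("open", 100)
).fetchall()

assert open_orders_app_side == open_orders_db_side
```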


How To Build A Network Of Security Champions In Your Organization

An SCP enlists employees from all different disciplines across a company (HR, marketing, finance, etc.) for focused cybersecurity training and guidance. These security champions then become the contact point and voice for cybersecurity within their various departments or offices alongside their main role. They help to advise on, embed and reinforce good security practices with their colleagues. This makes security advice more relatable and accessible, avoiding the “us versus them” attitude that can sometimes exist between employees and traditional enterprise security teams. It’s easier for a colleague to explain a security risk or issue to a co-worker than it is for a security pro whom the co-worker has never met. The security champion’s role is a little like that of a department’s fire marshal. In the same way that the marshal doesn’t need to be a specialist in firefighting, the security champion doesn’t need to be an IT or infosec pro; they just need to know how their colleagues work, what the security risks are within their department or team and the common-sense steps to take to mitigate those risks. 


Companies warned to step up cyber security to become ‘insurable’

Carolina Klint, risk management leader for continental Europe for insurance broker Marsh, and one of the contributors to the report said that insurance companies were now coming out and saying that “cyber risk is systemic and uninsurable”. That means, in future, companies may not be able to find cover for risks such as ransomware, malware or hacking attacks. “It’s up to the insurance industry and to the capital markets whether or not they find the risk palatable,” she said in an interview with Computer Weekly, “but that is the direction it is moving in.” In recent days, cyber attacks have disrupted the international delivery services of the Royal Mail and infected IT systems at the Guardian newspaper with ransomware. The Global risks report rates cyber warfare and economic conflict as more serious threats to stability than the risks of military confrontation. “There is a real risk that cyber attacks may be targeted at critical infrastructure, health care and public institutions,” said Klint. “And that would have dramatic ramifications in terms of stability.”


6 Roles That Can Easily Transition to a Cybersecurity Team

Software engineers possess various technical skills, including coding and software development. They also understand the complexities involved in developing a secure application. This makes them well-suited for different types of cybersecurity tasks. ... They should also be familiar with various cyber threats, such as malware and phishing. Additionally, since software development is constantly evolving, software engineers should be prepared to keep up with the latest trends to remain competitive. ... Network architects possess a strong knowledge of networking technologies and are proficient in setting up secure networks. While not all security roles require a deep technical understanding, network architects are well-suited to design secure networks and implement protection measures. They can also review existing systems for vulnerabilities and recommend solutions to mitigate risks. ... They should also be familiar with emerging technologies and techniques related to cybersecurity, such as artificial intelligence (AI) and machine learning (ML). Another important skill for network architects is identifying and differentiating between legitimate and malicious traffic signals.


Getting started with data science and machine learning: what architects need to know

In almost every scientific field, the role of the data scientist is actually played by a physicist, chemist, psychologist, mathematician (for numerical experiments), or some other domain expert. They have a deep understanding of their field and pick up the necessary techniques to analyze their data. They have a set of questions they want to ask and know how to interpret the results of their models and experiments. With the increasing popularity of industrial data science and the rise of dedicated data science educational programs, a typical data scientist's training lacks domain-specific training. ... There are two opposing approaches. One is to know which tool to use, pick up a pre-implemented version online, and apply it to a problem. This is a very reasonable approach for most practical problems. The other is to deeply understand how and why something works. This approach takes much more time but offers the advantage of modifying or extending the tool to make it more powerful.


ZeroOps Helps Developers Manage Operational Complexity

The first thing to take into account when implementing ZeroOps for your business: You must consider everything that isn’t directly driving value. Who should be doing those tasks? You want your core staff to be focused on the business, so it’s worth considering a managed service provider as a partner. This can help provide your team with the skills and support they need, while allowing them to focus on their core competencies. The right tools can help your team be more productive than you ever imagined, without hiring new full-time employees. ... More agile, with less pressure and responsibility to handle “the little things” that we know aren’t so little. Imagine how your team members could shine when supported by experts to assist them so they can focus on providing value. Imagine being able to deliver projects much more quickly so delivery expectations actually aligned with what was realistic. ... Managed services can help make your team more productive and capitalize on their talent. When you struggle with a problem, it’s likely that your managed service provider has already solved it for others so you don’t have to reinvent the wheel.


Dark Web Monitoring For Law Firms: Is It Worthwhile?

One real value for a dark web scan is awareness. You should be able to obtain an initial dark web scan free of charge – without paying an ongoing monthly monitoring fee, which we certainly don’t recommend. The initial report will help identify if you have law firm employees that tend to reuse the same password across multiple sites. It may even identify sites you were not aware of so that you can immediately change the password. Use the dark web scan to educate employees at your next cybersecurity awareness training session. If you’re not teaching your employees about cybersecurity, at least annually, you are missing a very significant part of cyber resilience! A human element is involved in data breaches 82% of the time. Take control of your data and don’t hand it over to a monitoring service. You should be using a password manager and a unique password for each website or application you use. Put a freeze on your credit file at the three major credit bureaus. Freezing your credit file is free. 



Quote for the day:

"The test we must set for ourselves is not to march alone but to march in such a way that others will wish to join us." -- Hubert Humphrey

Daily Tech Digest - January 22, 2022

The Rise of the Technical CEO

Today’s environment is a unique one for leaders. Businesses cannot afford for leadership to be focused on just one part of the business—the world is too interconnected and moves too quickly. Which is why we’ve moved onto the era of technologists as CEOs. Every company is a technology company in today’s digital-first world. Industries are constantly being disrupted by the next big thing, which means businesses need modern CEOs that are equally comfortable managing the business as they are with technology. As so many organizations look to navigate digital transformation journeys, having a leader at the helm who understands the importance not just of having technology, but having the right technology, is critical. Technology is a strategic advantage for today’s organizations. Without a leader who can make those nuanced decisions, it’s impossible to create solutions that will be useful for customers. And customers must always be at the center of any CEO’s decisions. Rocket’s solutions touch the lives of so many every day – from withdrawing money from an ATM to swiping your credit card at a convenience store, our technology is critical to ensuring the lives of millions run smoothly.


IT spending trends point to CIO innovation

Forward thinking, meanwhile, will spark an increase in long-term contracts that accommodate three-to-five-year planning horizons. Inflation and the war for talent also encourage extended contract periods, Lovelock noted. Longer-term deals offer CIOs greater certainty regarding cost and the availability of technical skills, he said. The skills shortage will also generate demand for external service providers such as consultants and MSPs. The Gartner forecast shows IT services spending growing 7.9% year over year in 2022, hitting $1.3 trillion. The market watcher expects IT services' spending growth to trail only enterprise software, which tops the Gartner forecast with a projected 11% year-over-year increase. Business and technology consulting services will emerge as one of the fastest-growing sectors in IT services, growing at a 10% clip in 2022, Lovelock said. Cloud adoption will help drive that spurt. Gartner research suggests the vast majority of large organizations will hire external consultants to devise cloud strategies over the next few years.


ICO criticises government-backed campaign to delay end-to-end encryption

The privacy watchdog said end-to-end encryption plays an important role in safeguarding privacy and online safety, protecting children from abusers, and is crucial for business services. The intervention follows the launch of a government-funded campaign this week that warns that social media companies are “blinding themselves” to child sexual abuse by introducing end-to-end encrypted messaging services. Stephen Bonner, the ICO’s executive director of innovation, said the discussion on end-to-end encryption had become too unbalanced, with too much focus on the costs, without weighing up the significant benefits it offers. “E2EE serves an important role both in safeguarding our privacy and online safety,” he said. “It strengthens children’s online safety by not allowing criminals and abusers to send them harmful content. “It is also crucial for businesses, enabling them to share information securely and fosters consumer confidence in digital services.”


Looking Beyond Biden's Binding Security Directive

What is truly alarming, however, is how far behind many public and private organizations are with their patch management procedures. We frequently find known vulnerabilities in our customers' business-critical applications that are several years old and still unpatched. This directive looks to change that by ensuring agencies and their third-party vendors develop plans to find and remediate these known vulnerabilities. Multiple studies demonstrate that detecting vulnerabilities and prioritizing the right patches quickly and efficiently are the largest challenges. By establishing a prioritized catalog of vulnerabilities, the directive seeks to give federal agencies a leg up. The onus of establishing a plan and process for remediation, however, still remains with the individual federal agencies. Nevertheless, we're glad to see the Biden administration take this critical step forward in improving the cybersecurity posture of the United States and, by extension, the companies that provide services to the federal government.
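
As a rough sketch of what consuming such a prioritized catalog might look like in practice, the snippet below pulls CISA's public Known Exploited Vulnerabilities feed and flags entries that match a hypothetical software inventory. The feed URL and JSON field names reflect the catalog's published schema and may change; the inventory pairs are invented for illustration.

```python
# Minimal sketch: pull a prioritized vulnerability catalog and flag entries
# relevant to software you run. The URL and field names assume CISA's public
# Known Exploited Vulnerabilities (KEV) JSON feed and may change over time.
import requests

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

# Hypothetical inventory of (vendor, product) pairs deployed in your environment.
inventory = {("Microsoft", "Exchange Server"), ("Apache", "Log4j2")}

def matching_kev_entries(inventory_pairs):
    catalog = requests.get(KEV_URL, timeout=30).json()
    for vuln in catalog.get("vulnerabilities", []):
        pair = (vuln.get("vendorProject"), vuln.get("product"))
        if pair in inventory_pairs:
            # Remediation due dates are what let agencies prioritize patch work.
            yield vuln.get("cveID"), vuln.get("dueDate"), vuln.get("shortDescription")

for cve, due, summary in matching_kev_entries(inventory):
    print(f"{cve} (remediate by {due}): {summary}")
```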


After ransomware arrests, some dark web criminals are getting worried

There's a consensus among cybersecurity experts that many of the major ransomware operations work out of Russia, with the authorities willing to turn a blind eye towards attacks targeting the West. But following arrests throughout the region, some cyber criminals are wondering if the risk is worth it. "This is a big change. I have no desire to go to jail," wrote one forum member. "In fact, one thing is clear, those who expect that the state would protect them will be greatly disappointed," said another. There's even concern that administrators of the dark web communities – who would have details about their users – could be coerced into working for law enforcement following arrest. Such is the paranoia among some forum members and ransomware affiliates that they suggest moving operations to a different jurisdiction, although this is unlikely to be a realistic option for many. "Those that are seasoned in cybercrime understand that by moving outside of Russia, they'll be taking on an even greater risk of being arrested by international law enforcement agencies. These agencies that are keeping tabs on cyber criminals will be watching for such potential moves," said Ziv Mador.


The internet runs on free open-source software. Who pays to fix it?

“Tech companies, enterprises, anyone writing software is dependent on open source,” says Wysopal. “Now there is a recognition at the highest levels of government that this is a big risk.” Easterly and other experts say that tech companies need to improve transparency. Adopting a Software Bill of Materials, as mandated by a 2021 executive order on cybersecurity from President Joe Biden, would help both developers and users better understand what is actually vulnerable to hacking when software flaws are discovered. Valsorda, who has managed to turn his own open-source work into a high-profile career, says that formalizing and professionalizing the relationship between developers and the big companies using their work could help. He advocates turning open-source work from a hobbyist pursuit into a professional career path so that critical infrastructure isn’t dependent on the spare time of a developer who already has a full-time job. And he argues that companies should develop systems to pay the people who maintain open-source projects their fair market value.
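
To make the SBOM point concrete, here is a minimal sketch that assumes a CycloneDX-style JSON SBOM: it lists the components a piece of software declares and cross-checks them against a hypothetical set of known-vulnerable package versions. The file name, field names, and advisory data are illustrative only, not a prescribed workflow.

```python
# Minimal sketch: read components declared in a CycloneDX-style JSON SBOM and
# cross-check them against a (hypothetical) set of known-vulnerable packages.
import json

with open("sbom.cdx.json") as f:  # illustrative file name
    sbom = json.load(f)

# Hypothetical advisory data: package name -> affected version.
known_vulnerable = {"log4j-core": "2.14.1"}

for component in sbom.get("components", []):
    name, version = component.get("name"), component.get("version")
    if known_vulnerable.get(name) == version:
        print(f"SBOM declares a vulnerable dependency: {name} {version}")
```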


The Prometheus traffic direction system is a major player in malware distribution

The goal of such traffic direction systems is to redirect legitimate web users to malware, phishing pages, tech support scams, or other malicious operations. This is achieved by placing malicious scripts on compromised websites that intercept traffic or through malicious advertisements that are served to users on legitimate websites through ad networks. The main benefit of a TDS is that it allows cybercriminals to define redirection rules from an administration panel based on the type of visitors hitting the system's web of malicious landing pages. On compromised websites, Prometheus achieves this through a simple PHP backdoor script that fingerprints visitors -- browser, OS, timezone, language settings -- and sends the information back to a command-and-control server from where it pulls redirect instructions defined by attackers. This means that different categories of visitors can be redirected to different campaigns depending on the target audience the different groups renting TDS services want to reach, and victims can also end up seeing localized scams in their own language.
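
Conceptually, those redirection rules amount to simple conditional routing on the collected fingerprint. The sketch below models that idea from an analyst's point of view; all rule data, class names, and URLs are invented for illustration and do not reflect Prometheus's actual implementation.

```python
# Conceptual sketch only: modelling fingerprint-based redirect rules the way an
# analyst might when picking apart a TDS configuration. Data here is invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Fingerprint:
    os: str
    language: str

# Each rule maps a visitor profile to the landing page chosen for that audience.
rules = {
    Fingerprint(os="Windows", language="de"): "https://example.invalid/de-landing",
    Fingerprint(os="Android", language="en"): "https://example.invalid/en-landing",
}
DEFAULT = "https://example.invalid/benign-decoy"  # non-targets see nothing unusual

def resolve_redirect(fp: Fingerprint) -> str:
    return rules.get(fp, DEFAULT)

print(resolve_redirect(Fingerprint(os="Windows", language="de")))
```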


Google finds a nation-state level of attacks on iPhone

The company behind the software used in these attacks, NSO, reportedly uses a fake GIF trick to target a vulnerability in the CoreGraphics PDF parser. The files have a .gif extension, but they are not GIF image files; the name is designed solely to keep the recipient from becoming suspicious. “The ImageIO library is used to guess the correct format of the source file and parse it, completely ignoring the file extension. Using this fake gif trick, more than 20 image codecs are suddenly part of the iMessage zero-click attack surface, including some very obscure and complex formats, remotely exposing probably hundreds of thousands of lines of code.” As Google noted, these attacks are difficult to thwart. Blocking all GIF images is unlikely to prove effective, because these files aren’t actually GIFs. The simplest approach is to block anything using a GIF extension, but the bad guys will simply switch to a different innocuous-sounding extension. ... Another Google point: “JBIG2 doesn't have scripting capabilities, but when combined with a vulnerability, it does have the ability to emulate circuits of arbitrary logic gates operating on arbitrary memory.”


FAQ: What's happening with 5G and airport safety?

The Federal Communications Commission (FCC) concluded in 2020 that studies warning of this danger did "not demonstrate that harmful interference would likely result under reasonable scenarios" or even "reasonably 'foreseeable' scenarios." Tom Wheeler, a visiting Brookings Institution fellow and former FCC head, said in a paper that he doesn't think there's a real technical problem. The long-term answer to this problem is to "improve the resilience of future radar altimeter designs to RF interference." In the meantime, Wheeler pointed out, "The FCC created a guard band between the 5G spectrum and the avionics spectrum in which 5G was forbidden. Boeing, in a filing with the FCC, had proposed just such a solution. The Boeing proposal was to prohibit 5G 'within the 4.1-4.2 GHz portion of the band.' The FCC agreed and then doubled the size of Boeing's proposed guard band to a 220 MHz interference buffer between the upper 5G usage at 3.98 GHz, and avionics usage at 4.2 GHz." That's all well and good, but the FAA and major US and international airlines aren't buying it.


Data Mesh Architecture Patterns

An Enterprise Data Mesh is composed of many components. Data Products, the primary building block within a Data Mesh, contain operational, analytic, and/or engagement data which is synchronized across the organization using the Enterprise's Data Mesh. APIs are used to access data within a Data Product. To support federated governance, each Data Product contains an audit log that records data changes and a catalog of the data it manages. An Enterprise's Data Mesh has many Data Products. Data Products subscribe to each other's data such that when one Data Product changes its data, the change is communicated to other Data Products using Change Data Capture and an Event Streaming Backbone. Lastly, an Enterprise Data Catalog (a synchronized aggregation of all Data Product catalogs and data changes) is used to make it easy for any user or developer to find, consume, and govern any data across the enterprise, while also providing the foundation for understanding data lineage across the enterprise.
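
A minimal sketch of these moving parts, using an in-memory stand-in for the event streaming backbone: two hypothetical Data Products exchange change events, and each keeps an audit trail of local and replicated changes. Names and structure are illustrative only, not a reference implementation of any particular platform.

```python
# Illustrative sketch (not any specific product): two Data Products exchanging
# change events over an in-memory "event streaming backbone", with each change
# also recorded in an audit log. Class, topic, and data names are invented.
from collections import defaultdict

class EventBackbone:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

class DataProduct:
    def __init__(self, name, backbone):
        self.name, self.backbone = name, backbone
        self.data, self.audit_log = {}, []

    def change(self, key, value):
        self.data[key] = value
        self.audit_log.append(("local-change", key, value))
        # Change Data Capture: emit the change onto the backbone for subscribers.
        self.backbone.publish(f"{self.name}.changes", {"key": key, "value": value})

    def follow(self, other_name):
        self.backbone.subscribe(f"{other_name}.changes", self._apply)

    def _apply(self, event):
        self.data[event["key"]] = event["value"]
        self.audit_log.append(("replicated", event["key"], event["value"]))

backbone = EventBackbone()
orders, analytics = DataProduct("orders", backbone), DataProduct("analytics", backbone)
analytics.follow("orders")
orders.change("order-42", {"status": "shipped"})
print(analytics.data)  # {'order-42': {'status': 'shipped'}}
```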



Quote for the day:

"Leadership is the key to 99 percent of all successful efforts." -- Erskine Bowles