Daily Tech Digest - February 12, 2025


Quote for the day:

“If you don’t have a competitive advantage, don’t compete.” -- Jack Welch


Security Is Blocking AI Adoption: Is BYOC the Answer?

Enterprises face unique hurdles in adopting AI at scale. Sensitive data must remain within secure, controlled environments, avoiding public networks or shared infrastructures. Traditional SaaS models often fail to meet these stringent data sovereignty and compliance demands. Beyond this, organizations require granular control, comprehensive auditing and full transparency to trace every AI decision and data access. This ensures vendors cannot interact with sensitive data without explicit approval and documentation. These unmet needs create a significant gap, preventing regulated industries from deploying AI solutions while maintaining compliance and security. ... The concept of Bring Your Own Cloud (BYOC) isn’t new. It emerged as a middle ground between traditional SaaS and on-premises deployments, promising to combine the best of both worlds: the convenience of managed services with the control and security of on-premises infrastructure. However, its history in the industry has been marked by both successes and cautionary tales. Early BYOC implementations often failed to live up to their promises. Some vendors merely deployed their software into customer cloud accounts without proper architectural planning, resulting in what were essentially remotely managed on-premises environments.


The Importance of Continuing Education in Data and Tech

Continuing education plays a vital role in workforce development and career advancement within the tech industries, where rapid technological advancements and evolving market demands necessitate a culture of lifelong learning. As businesses increasingly rely on sophisticated data analytics, artificial intelligence (AI), and cloud technologies, professionals in these fields must continuously update their skills to remain competitive. Continuing education offers a pathway for individuals to acquire new capabilities, adapt to emerging technologies, and gain proficiency in specialized areas that are in high demand. By engaging in ongoing learning opportunities, tech professionals can enhance their expertise, making them more valuable to their current employers and more attractive to potential future ones. ... Professional certifications and competency-based education have become significant avenues for career advancement in the data and tech field. As the landscape of technology rapidly evolves, organizations increasingly seek professionals who possess validated skills and up-to-date knowledge. Professional certifications serve as tangible proof of one’s expertise in specific areas such as data governance, analytics, cybersecurity, or cloud computing. These certifications, offered by leading industry bodies and tech companies, are designed to align with current industry standards and demands.


Agents, shadow AI and AI factories: Making sense of it all in 2025

“Agentic AI” promises “digital agents” that learn from us and can perceive, reason through problems in multiple steps, and then make autonomous decisions on our behalf. They can solve multilayered questions that require them to interact with many other agents, formulate answers and take actions. Consider forecasting agents in the supply chain predicting customer needs by engaging customer service agents, and then proactively adjusting warehouse stock by engaging inventory agents. Every knowledge worker will find themselves gaining these superhuman capabilities backed by a team of domain-specific task agent workers helping them tackle large, complex jobs with less expended effort. ... However, the proliferation of generative, and soon agentic, AI presents a growing problem for IT teams. Maybe you’re familiar with “shadow IT,” where individual departments or users procure their own resources without IT knowing. In today’s world we have “shadow AI,” and it’s hitting businesses on two fronts. ... Today’s enterprises create value through insights and answers driven by intelligence, setting them apart from their competitors. Just as past industrial revolutions transformed industries — think about steam, electricity, the internet and later computer software — the age of AI heralds a new era where the production of intelligence is the core engine of every business.


Is VMware really becoming the new mainframe?

“CIOs can start to unwind their dependence on VMware,” he says. “But they need to know it may not have any material reduction in their spend with Broadcom over multiple renewals. They’re going to have to get completely off Broadcom.” Still, Warrilow recommends that CIOs running VMware consider alternatives over the long term. They should also look for exit strategies for other market-dominant IT products they use, given that Broadcom has seen early success with VMware, he says. “The cautionary tale for CIOs is that this is just the beginning,” he says. “Every tech investment firm is going to be saying, ‘I want what Broadcom has with their share price.’” ... “The comparison works a bit, maybe from a stickiness perspective, because customers have built their applications and workload using virtualization technology on VMware,” he says. “When they have to do a mass refactoring of applications, it’s very, very hard.” But the analogy has its limitations because many users think of mainframes as a legacy technology, while VMware’s cloud-based products address future challenges, he adds. “The cloud is the future for running your AI workload,” Shenoy says. “Customers have trusted us for the last 20 to 25 years to run their business-critical applications, and the interesting part right now is we are seeing a lot of growth of these AI workloads and container workloads running on VMware.”


Deep Learning – a Necessity

It is essential in architecture that we realize that a skill set is not an arbitrary thing. It isn’t “learn one skill and you are done.” Nor is it “learn any skill from any background and you’re in.” It is the application of all of the identified and necessary skills combined that makes a distinguished architect. It is also important to understand the purpose and context of mastery. Working in a startup is very different from working in a large corporation. Industry can change things significantly as well. Always remember that the profession’s purpose has to be paramount in the learning. For example, both doctors and lawyers have to deal with clients and need human interaction skills to be successful. Yet, the nature and implementation of these differ drastically. We will explore this point in a further article. However, do not underestimate the impact of changing the meaning of the profession while claiming similar skills. The current environment is rife with this kind of co-opting of the terminology and tools to alter the whole purpose of architecture fundamentally. ... In medicine and other professions, an individual studies and practices for 7+ years to become fully independent, and they never stop learning. This learning is tracked by both mentors and the profession. Because medicine is so essential to humans, it is important that professionals are measured and constantly update and hone their competencies.


Crawl, then walk, before you run with AI agents, experts recommend

The best bet for percolating AI agents throughout the organization is to keep things as simple as possible. "Companies and employees that have already found ways to operationalize intelligent agents for simple tasks are best placed to exploit the next wave with agentic AI," said Benjamin Lee, professor of computer and information science at the University of Pennsylvania. "These employees would already be engaging generative AI for simple tasks and they would be manually breaking complex tasks into simpler tasks for the AI. Such employees would already be seeing productivity gains from using generative AI for these simple tasks." Rowan agreed that enterprises should adopt a crawl, walk, run approach: "Begin with a pilot program to explore the potential of multiagent systems in a controlled, measurable environment." "Most people say AI is at the toddler stage, whereas agentic AI is like a tween," said Ben Sapp, global practice lead of intelligence at Digital.ai. "It's functional and knows how to execute certain functions." Enterprises and their technology teams "should socialize the use of generative AI for simple tasks within their organizations," Lee continued. "They should have strategies for breaking complex tasks into simpler ones so that, when intelligent agents become a reality, the sources of productivity gains are transparent, easily understood, and trusted."


Growth of digital wallet use shaking up payment regulations and benefits delivery

Australian banks are calling on the government to pass legislation that accommodates payments with digital wallets within the country’s regulatory framework. A release from the Australian Banking Association (ABA) argues that with the country’s residents making $20 billion worth of payments across 500 million transactions each month with mobile wallets, all players within the payment ecosystem should be under the remit of the Reserve Bank of Australia. ... Digital wallets are by far the most popular method of making cross-border payments, according to a new report from Payments Cards & Mobile. The How Digital Wallets Are Transforming Cross-Border Transactions report shows digital wallets are chosen for international transactions by 42.1 percent of respondents. That is more than the next two most popular methods, money transfer services (16.8 percent) and bank accounts (14.8 percent), combined. Transactions with digital wallets are much faster than wire transfers, are available to people who don’t possess bank accounts, and have lower fees than bank transfers, the report says. Interoperability remains a challenge, and regulations and infrastructure limitations could pose barriers to adoption, but the report authors only expect the dominance of digital wallets to increase in the years ahead.
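The report's headline claim can be sanity-checked with the percentages quoted above; a quick arithmetic sketch:

```python
# Cross-border payment method shares as quoted from the report excerpt.
digital_wallets = 42.1   # percent choosing digital wallets
money_transfers = 16.8   # money transfer services
bank_accounts = 14.8     # bank account transfers

next_two_combined = money_transfers + bank_accounts
print(round(next_two_combined, 1))          # 31.6
print(digital_wallets > next_two_combined)  # True: wallets exceed the next two combined
```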


My vision is to create a digital twin of our entire operations, from design and manufacturing to products and customers

We approach this transformation from three dimensions. First is empathy – truly understanding not just who our customers are, but their emotions. This is where the concept of creating a ‘digital twin’ of the customer comes in. Second is innovation – not just adopting new technologies but ensuring that our processes are lean, digitised, and seamless throughout the customer journey, from research to purchase, service, and brand loyalty. The goal is to provide a consistent and empathetic experience across all touchpoints. ... The first challenge is identifying our customers. For example, if a distributor in one business also buys from another or if a consumer connects with one of our industrial projects, it’s hard to track. To address this, we launched a customer UID project, which has been in progress for months. It helps us identify customers across channels while keeping an eye on privacy and adhering to upcoming data protection regulations. The second part involves gathering all customer-related data in one place. Over the past three years, we unified all customer interactions into a single platform under a ‘one CRM’ strategy, which was complex but essential. Now, with AI solutions like social listening combined with sentiment analysis, we can understand what our customers are saying about us and where we need to improve, both in India and globally.
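The interview doesn't describe how the customer UID is derived. One common privacy-conscious approach is to hash normalized identifiers into a stable pseudonymous key; the sketch below is illustrative only (the function, fields, and normalization rules are assumptions, not the company's actual design):

```python
import hashlib

def customer_uid(email=None, phone=None):
    """Derive a stable pseudonymous UID from whichever identifiers are known.

    Identifiers are normalized before hashing so the same customer seen
    through different channels resolves to the same key, without storing
    the raw values next to the UID.
    """
    parts = []
    if email:
        parts.append("email:" + email.strip().lower())
    if phone:
        parts.append("phone:" + "".join(ch for ch in phone if ch.isdigit()))
    if not parts:
        raise ValueError("at least one identifier is required")
    return hashlib.sha256("|".join(parts).encode()).hexdigest()[:16]

# Same customer, different channel formatting -> same UID
assert customer_uid(email=" Jane@Example.com") == customer_uid(email="jane@example.com")
assert customer_uid(phone="+91 98765-43210") == customer_uid(phone="919876543210")
```

Note that real cross-channel identity resolution also has to link records that share no common identifier (say, a distributor known by tax ID in one business and by email in another), which requires matching logic well beyond this sketch.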


Will AI Chip Supply Dry Up and Turn Your Project Into a Costly Monster?

CIOs and other IT leaders face tremendous pressure to quickly develop GenAI strategies in the face of a potential supply shortage. Given the cost of individual units, spending can easily reach into the multi-million-dollar range. But it wouldn’t be the first time companies have dealt with semiconductor shortages. During the COVID-19 pandemic, a spike in PC demand for remote work collided with global shipping disruptions to create a chip drought that impacted everything from refrigerators to automobiles and PCs. “One thing we learned was the importance of supply chain resiliency, not being overly dependent on any one supplier and understanding what your alternatives are,” Hoecker says. “When we work with clients to make sure they have a more resilient supply chain, we consider a few things … One is making sure they rethink how much inventory do they want to keep for their most critical components so they can survive any potential shocks.” She adds, “Another is geographic resiliency, or understanding where your components come from and do you feel like you’re overly exposed to any one supplier or any one geography.” Nvidia’s GPUs, she notes, are harder to find alternatives for -- but other chips do have alternatives. “There are other places where you can dual-source or find more resiliency in your marketplace.”
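Hoecker's two checks, supplier concentration and geographic concentration, are straightforward to run over a component inventory. A toy sketch, with component data and field names invented for illustration:

```python
components = [
    {"name": "training GPU", "suppliers": ["Nvidia"], "regions": ["TW"]},
    {"name": "DRAM", "suppliers": ["Samsung", "SK hynix", "Micron"],
     "regions": ["KR", "US", "TW"]},
    {"name": "power supply", "suppliers": ["VendorA", "VendorB"], "regions": ["CN"]},
]

def resiliency_flags(component):
    """Flag components exposed to a single supplier or a single geography."""
    flags = []
    if len(set(component["suppliers"])) == 1:
        flags.append("single-sourced")
    if len(set(component["regions"])) == 1:
        flags.append("single-region")
    return flags

for c in components:
    print(c["name"], "->", resiliency_flags(c) or "ok")
```

Running this surfaces the GPU as both single-sourced and single-region, the sort of exposure the quoted advice says to rethink first.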


WTF? Why the cybersecurity sector is overrun with acronyms

Imagine an organization is in the midst of a massive hack or security breach, and employees or clients are having to Google frantically to translate company emails, memos or crisis plans, slowing down the response. When these acronyms inevitably migrate into a cybersecurity company’s external marketing or communications efforts, they’re almost guaranteed to cause the general public to tune out news about issues and innovations that could have a far-reaching impact on how people live their lives and conduct their businesses. This is especially true as artificial intelligence (AI!) and machine learning (ML!) technologies expand and new acronyms emerge to keep pace with developments. Acronyms can also have unfortunate real-life connotations — point of sale, to name just one example. When shortened to POS, it can suggest something is… well, crappy. ... So, what’s behind the tendency to shorten terms to a jumble of often incomprehensible acronyms and abbreviations? “On the one hand, acronyms, abbreviations and jargon are used to achieve brevity, standardization and efficiency in communication, so if a profession is steeped in complex and technical language, it will likely be flowing with acronyms,” says Ian P. McCarthy, a professor of innovation and operations management at Simon Fraser University in Burnaby, British Columbia.

Daily Tech Digest - February 11, 2025


Quote for the day:

"Your worth consists in what you are and not in what you have." -- Thomas Edison


Protecting Your Software Supply Chain: Assessing the Risks Before Deployment

Given the vast number of third-party components used in modern IT, it's unrealistic to scrutinize every software package equally. Instead, security teams should prioritize their efforts based on business impact and attack surface exposure. High-privilege applications that frequently communicate with external services should undergo product security testing, while lower-risk applications can be assessed through automated or less resource-intensive methods. Whether done before deployment or as a retrospective analysis, a structured approach to PST ensures that organizations focus on securing the most critical assets first while maintaining overall system integrity. ... While Product Security Testing will never prevent a breach of a third party outside your control, it does allow organizations to make informed decisions about their defensive posture and response strategy. Many organizations follow a standard process of identifying a need, selecting a product, and deploying it without a deep security evaluation. This lack of scrutiny can leave them scrambling to determine the impact when a supply chain attack occurs. By incorporating PST into the decision-making process, security teams gain critical documentation, including dependency mapping, threat models, and specific mitigations tailored to the technology in use.
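The triage described above, weighing privilege level against external exposure, can be captured as a simple scoring rubric. A minimal sketch; the weights, threshold, and inventory entries are invented for illustration:

```python
def triage_score(app):
    """Weight privilege level by external exposure; higher means riskier."""
    privilege = {"low": 1, "medium": 2, "high": 3}[app["privilege"]]
    exposure = 3 if app["talks_externally"] else 1
    return privilege * exposure

inventory = [
    {"name": "edr-agent", "privilege": "high", "talks_externally": True},
    {"name": "internal-wiki", "privilege": "low", "talks_externally": False},
    {"name": "ci-runner", "privilege": "medium", "talks_externally": True},
]

# High scorers get full product security testing; the rest get
# automated, less resource-intensive assessment.
for app in sorted(inventory, key=triage_score, reverse=True):
    tier = "full PST" if triage_score(app) >= 6 else "automated scan"
    print(f"{app['name']}: score {triage_score(app)} -> {tier}")
```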


Google’s latest genAI shift is a reminder to IT leaders — never trust vendor policy

Entities out there doing things you don’t like are always going to be able to get generative AI (genAI) services and tools from somebody. You think large terrorist cells can’t use their money to pay somebody to craft LLMs for them? Even the most powerful enterprises can’t stop it from happening. But, that may not be the point. Walmart, ExxonMobil, Amazon, Chase, Hilton, Pfizer, Toyota, and the rest of those heavy hitters merely want to pick and choose where their monies are spent. Big enterprises can’t stop AI from being used to do things they don’t like, but they can make sure none of it is being funded with their money. If they add a clause to every RFP that they will only work with model-makers that agree to not do X, Y, or Z, that will get a lot of attention. The contract would have to be realistic, though. It might say, for instance, “If the model-maker later chooses to accept payments for the above-described prohibited acts, they must reimburse all of the dollars we have already paid and must also give us 18 months’ notice so that we can replace the vendor with a company that will respect the terms of our contracts.” From the perspective of Google, along with Microsoft, OpenAI, IBM, AWS and others, the idea is to take enterprise dollars on top of government contracts.


Is Fine-Tuning or Prompt Engineering the Right Approach for AI?

It’s not just about having access to GPUs — it’s about getting the most out of proprietary data with new tools that make fine-tuning easier. Here’s why fine-tuning is gaining traction:

- Better results with proprietary data: Fine-tuning allows businesses to train models on their own data, making the AI much more accurate and relevant to their specific tasks. This leads to better outcomes and real business value.
- Easier than ever before: Tools like Hugging Face’s open source libraries, PyTorch and TensorFlow, along with cloud services, have made fine-tuning more accessible. These frameworks simplify the process, even for teams without deep AI expertise.
- Improved infrastructure: The rising availability of powerful GPUs and cloud-based solutions has made it much easier to set up and run fine-tuning at scale.

While fine-tuning opens the door to more customized AI, it does require careful planning and the right infrastructure to succeed. ... As enterprises accelerate their AI adoption, choosing between prompt engineering and fine-tuning will have a significant impact on their success. While prompt engineering provides a quick, cost-effective solution for general tasks, fine-tuning unlocks the full potential of AI, enabling superior performance on proprietary data.


Shifting left without slowing down

On the one hand, automation enabled by GenAI tools in software development is driving unprecedented developer productivity, further emphasizing the gap created by manual application security controls, like security reviews or threat modeling. But in parallel, recent advancements in code understanding enabled by these technologies, together with programmatic policy-as-code security rules, enable a giant leap in the value security automation can bring. ... The first step is recognizing security as a shared responsibility across the organization, not just a specialized function. Equipping teams with automated tools and clear processes helps integrate security into everyday workflows. Establishing measurable goals and metrics to track progress can also provide direction and accountability. Building cross-functional collaboration between security and development teams sets the foundation for long-term success. ... A common pitfall is treating security as an afterthought, leading to disruptions that strain teams and delay releases. Conversely, overburdening developers with security responsibilities without proper support can lead to frustration and neglect of critical tasks. Failure to adopt automation or align security goals with development objectives often results in inefficiency and poor outcomes.
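Policy-as-code means expressing security requirements as executable checks that run automatically in the pipeline; dedicated engines such as Open Policy Agent do this declaratively. A minimal Python sketch of the idea, with invented policy names and config fields:

```python
# Each policy is a named predicate over a service's deployment config.
POLICIES = {
    "no-public-buckets": lambda cfg: not cfg.get("bucket_public", False),
    "tls-1.2-minimum": lambda cfg: tuple(
        int(x) for x in cfg.get("min_tls", "1.0").split(".")) >= (1, 2),
    "threat-model-reviewed": lambda cfg: cfg.get("threat_model_reviewed", False),
}

def violations(cfg):
    """Return the names of all policies the config fails."""
    return [name for name, check in POLICIES.items() if not check(cfg)]

cfg = {"bucket_public": True, "min_tls": "1.2", "threat_model_reviewed": True}
print(violations(cfg))  # -> ['no-public-buckets']
```

Because the checks are code, they run on every commit rather than waiting on a manual security review, which is the leap in value the paragraph describes.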


How To Approach API Security Amid Increasing Automated Attack Sophistication

We’ve now gone from ‘dumb’ attacks—for example, web-based attacks focused on extracting data from third parties and on a specific or single vulnerability—to ‘smart’ AI-driven attacks often involving picking an actual target, resulting in a more focused attack. Going after a particular organization, perhaps a large organization or even a nation-state, instead of looking for vulnerable people is a significant shift. The sophistication is increasing as attackers manipulate request payloads to trick the backend system into an action. ... Another element of API security is being aware of sensitive data. Personally Identifiable Information (PII) is moving through APIs constantly and is vulnerable to theft or data exfiltration. Organizations often do not pay attention to vulnerabilities until the result is damage to the organization through leaked PII, stolen finances, or harm to brand reputation. ... The security teams know the network systems and the infrastructure well but don't understand the application behaviors. The DevOps team tends to own the applications but doesn’t see anything in production. This split boundary in most organizations makes it ripe for exploitation. Many data exfiltration cases fall in this no man’s land since an authenticated user executes most incidents.
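Being aware of sensitive data usually starts with scanning API payloads for PII patterns. A deliberately simplistic sketch; real detection needs far broader coverage and context (names, addresses, locale-specific identifiers):

```python
import re

# Illustrative patterns only; production detection needs much more coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_payload(body: str):
    """Return which PII categories appear in an API request/response body."""
    return sorted(k for k, pat in PII_PATTERNS.items() if pat.search(body))

print(scan_payload('{"user": "jane@example.com", "ssn": "123-45-6789"}'))
# -> ['email', 'us_ssn']
print(scan_payload('{"status": "ok"}'))  # -> []
```

Hooking a check like this into an API gateway gives both the security and DevOps teams visibility into the production traffic that currently falls into the no man's land the paragraph describes.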


Top 5 ways attackers use generative AI to exploit your systems

Gen AI tools help criminals pull together different sources of data to enrich their campaigns — whether this is group social profiling, or targeted information gleaned from social media. “AI can be used to quickly learn what types of emails are being rejected or opened, and in turn modify its approach to increase phishing success rate,” Mindgard’s Garraghan explains. ... The traditionally difficult task of analyzing systems for vulnerabilities and developing exploits can be simplified through use of gen AI technologies. “Instead of a black hat hacker spending the time to probe and perform reconnaissance against a system perimeter, an AI agent can be tasked to do this automatically,” Mindgard’s Garraghan says. ... “This sharp decrease strongly indicates that a major technological advancement — likely GenAI — is enabling threat actors to exploit vulnerabilities at unprecedented speeds,” ReliaQuest writes. ... Check Point Research explains: “While ChatGPT has invested substantially in anti-abuse provisions over the last two years, these newer models appear to offer little resistance to misuse, thereby attracting a surge of interest from different levels of attackers, especially the low skilled ones — individuals who exploit existing scripts or tools without a deep understanding of the underlying technology.”


Why firewalls and VPNs give you a false sense of security

VPNs and firewalls play a crucial role in extending networks, but they also come with risks. By connecting more users, devices, locations, and clouds, they inadvertently expand the attack surface with public IP addresses. This expansion allows users to work remotely from anywhere with an internet connection, further stretching the network’s reach. Moreover, the rise of IoT devices has led to a surge in Wi-Fi access points within this extended network. Even seemingly innocuous devices like Wi-Fi-connected espresso machines, meant for a quick post-lunch pick-me-up, contribute to the proliferation of new attack vectors that cybercriminals can exploit. ... More doesn’t mean better when it comes to firewalls and VPNs. Expanding a perimeter-based security architecture rooted in firewalls and VPNs means more deployments, more overhead costs, and more time wasted for IT teams – but less security and less peace of mind. Pain also comes in the form of degraded user experience and satisfaction with VPN technology for the entire organization due to backhauling traffic. Other challenges like the cost and complexity of patch management, security updates, software upgrades, and constantly refreshing aging equipment as an organization grows are enough to exhaust even the largest and most efficient IT teams.


Building Trust in AI: Security and Risks in Highly Regulated Industries

AI hallucinations have emerged as a critical problem, with systems generating plausible but incorrect information - for instance, AI fabricated software dependencies, such as PyTorture, leading to potential security risks. Hackers could exploit these hallucinations by creating malicious components masquerading as real ones. In another case, an AI libelously fabricated an embezzlement claim, resulting in legal action - marking the first time AI was sued for libel. Security remains a pressing concern, particularly with plugins and software supply chains. A ChatGPT plugin once exposed sensitive data due to a flaw in its OAuth mechanism, and incidents like PyTorch’s vulnerable release over Christmas demonstrate the risks of system exploitation. Supply chain vulnerabilities affect all technologies, while AI-specific threats like prompt injection allow attackers to manipulate outputs or access sensitive prompts, as seen in Google Gemini. ... Organizations can enhance their security strategies by utilizing frameworks like Google’s Secure AI Framework (SAIF). These frameworks highlight security principles, including access control, detection and response systems, defense mechanisms, and risk-aware processes tailored to meet specific business needs.
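One concrete defense against hallucinated dependencies like the fabricated "PyTorture" above is to vet AI-suggested package names against an internal allowlist or private mirror before anything is installed. A sketch; the allowlist contents are illustrative:

```python
# In practice this would be your organization's vetted internal mirror.
APPROVED_PACKAGES = {"torch", "numpy", "requests", "pandas"}

def vet_dependencies(proposed):
    """Split AI-suggested dependencies into approved and needs-review lists.

    Hallucinated or typosquatted names fail the allowlist check instead of
    being installed blindly from a public index.
    """
    approved = [p for p in proposed if p.lower() in APPROVED_PACKAGES]
    flagged = [p for p in proposed if p.lower() not in APPROVED_PACKAGES]
    return approved, flagged

ok, review = vet_dependencies(["numpy", "PyTorture", "requests"])
print(ok)      # -> ['numpy', 'requests']
print(review)  # -> ['PyTorture']
```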


When LLMs become influencers

Our ability to influence LLMs is seriously circumscribed. Perhaps if you’re the owner of the LLM and associated tool, you can exert outsized influence on its output. For example, AWS should be able to train Amazon Q to answer questions, etc., related to AWS services. There’s an open question as to whether Q would be “biased” toward AWS services, but that’s almost a secondary concern. Maybe it steers a developer toward Amazon ElastiCache and away from Redis, simply by virtue of having more and better documentation and information to offer a developer. The primary concern is ensuring these tools have enough good training data so they don’t lead developers astray. ... Well, one option is simply to publish benchmarks. The LLM vendors will ultimately have to improve their output or developers will turn to other tools that consistently yield better results. If you’re an open source project, commercial vendor, or someone else that increasingly relies on LLMs as knowledge intermediaries, you should regularly publish results that showcase those LLMs that do well and those that don’t. Benchmarking can help move the industry forward. By extension, if you’re a developer who increasingly relies on coding assistants like GitHub Copilot or Amazon Q, be vocal about your experiences, both positive and negative. 
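Publishing such benchmarks can be as simple as scoring each assistant's answers against a vetted answer key and releasing the table. A toy harness; the models, questions, and answers are all invented for illustration:

```python
def score_model(answers, answer_key):
    """Fraction of benchmark questions answered correctly."""
    correct = sum(1 for q in answer_key if answers.get(q) == answer_key[q])
    return correct / len(answer_key)

answer_key = {"q1": "B", "q2": "A", "q3": "D"}
results = {
    "model-a": {"q1": "B", "q2": "A", "q3": "C"},
    "model-b": {"q1": "B", "q2": "C", "q3": "C"},
}

# Rank models by score, best first, and print the leaderboard.
for name in sorted(results, key=lambda n: score_model(results[n], answer_key),
                   reverse=True):
    print(f"{name}: {score_model(results[name], answer_key):.0%}")
```

Regularly rerunning and publishing a table like this is the feedback loop the paragraph argues for: vendors whose models score poorly have a visible incentive to improve.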


Deepfakes: How Deep Can They Go?

Metaphorically, spotting deepfakes is like playing the world’s most challenging game of “spot the difference.” The fakes have become so sophisticated that the inconsistencies are often nearly invisible, especially to the untrained eye. It requires constant vigilance and the ability to question the authenticity of audiovisual content, even when it looks or sounds completely convincing. Recognizing threats and taking decisive actions are crucial for mitigating the effects of an attack. Establishing well-defined policies, reporting channels, and response workflows in advance is imperative. Think of it like a citywide defense system responding to incoming missiles. Early warning radars (monitoring) are necessary to detect the threat; anti-missile batteries (AI scanning) are needed to neutralize it; and emergency services (incident response) are essential to quickly handle any impacts. Each layer works in concert to mitigate harm. ... If a deepfake attack succeeds, organizations should immediately notify stakeholders of the fake content, issue corrective statements, and coordinate efforts to remove the offending content. They should also investigate the source, implement additional verification measures, and provide updates to rebuild trust and consider legal action. 


Daily Tech Digest - February 10, 2025


Quote for the day:

"If it wasn't hard, everyone would do it, the hard is what makes it great." -- Tom Hanks


Privacy Puzzle: Are Businesses Ready for the DPDP Act?

The State of Data Privacy in India 2024 report shows mixed responses. While 56% of businesses think the DPDP Act addresses key privacy issues, 30% are unsure and 14% remain skeptical. Even more troubling, more than 82% of companies lack transparency in handling data, raising serious trust concerns. ... smaller businesses, such as micro, small and medium enterprises, or MSMEs, and startups, often struggle due to limited resources. Many rely on IT or legal teams to oversee privacy initiatives, with some lacking any formal governance structures. This fragmented approach poses significant risks, especially as these organizations are equally subject to regulatory scrutiny under the DPDP Act. ... Third-party risk is another critical concern. Many enterprises depend on vendors for essential services, yet only 38% use a combination of risk assessments and contractual obligations to manage third-party privacy risks. Eight percent of organizations lack any significant measures, leaving them exposed to potential data leaks and regulatory penalties. ... Despite progress made in privacy staffing and strategy alignment, privacy professionals are experiencing increased stress within a complex compliance and risk landscape, according to new research from ISACA.


CISOs: Stop trying to do the lawyer’s job

“It’s good to be mindful in advance of the security and privacy requirements in the jurisdictions the organization is operating within, and to prepare possible responses should there be incidents that violate those laws and how to respond to those,” says Christine Bejerasco, CISO at WithSecure. Of course, the conversation between the two parties can go smoothly if there’s an existing relationship. If not, that relationship should be built. “Reaching out to legal experts should be as straightforward as reaching out to another colleague,” Bejerasco adds. “Just talk to them directly.” ... Some CISOs have a legal background or extensive experience working with general counsel. However, this does not mean they should act as legal advisors or take on responsibilities outside their role. “It is important to respect boundaries and not overstep job functions,” says Stacey Cameron, CISO at Halcyon. “There’s nothing wrong with differing opinions, interpretations, or healthy discussions, but for legal matters, it will be the lawyers’ responsibility to make a case on behalf of the company, so we need to respect each other’s roles and stay in our respective lanes.” According to Cameron, overstepping boundaries is one of the biggest mistakes CISOs can make when they are trying to build a relationship with their organization’s lawyers.


Inside Monday’s AI pivot: Building digital workforces through modular AI

The initial deployment of gen AI at Monday didn’t quite generate the return on investment users wanted, however. That realization led to a bit of a rethink and pivot as the company looked to give its users AI-powered tools that actually help to improve enterprise workflows. That pivot has now manifested itself with the company’s “AI blocks” technology and the preview of its agentic AI technology that it calls “digital workforce.” Monday’s AI journey, for the most part, is all about realizing the company’s founding vision. “We wanted to do two things, one is give people the power we had as developers,” Mann told VentureBeat in an exclusive interview. “So they can build whatever they want, and they feel the power that we feel, and the other end is to build something they really love.” ... Simply put, AI functionality needs to be in the right context for users — directly in a column, component or service automation. AI blocks are pre-built AI functions that Monday has made accessible and integrated directly into its workflow and automation tools. For example, in project management, the AI can provide risk mapping and predictability analysis, helping users better manage their projects. 


Courting Global Talent: How can Web3 Startups Attract the Best Developers in the World?

A company that recruits without concrete guiding values will often hire quickly and regret the results. Web3 projects are no exception. Fortunately, there are a number of pre-established values in Web3 that can help offset this tendency: community, inclusivity, sustainability, and collaboration. These beliefs should be the guiding frameworks behind any Web3 startup's hiring policy, enabling them to assess candidates with a clear understanding of whether the applicant's character aligns with the company's DNA. Web3 needs high-performing people who not only bring their own unique experiences to an organisation, but whose broader values align with the company's guiding principles. The focus of any hiring strategy should never be quantity over quality, as this will almost always result in disappointment and wasted time. Hiring people who are the right fit - measured by how well the candidate exemplifies the company's overarching values - should be non-negotiable. Likewise, transparency, another of Web3's core tenets, should be baked into every step of the hiring funnel, and it comes in two modes. Firstly, Web3 companies should be aware of their unique value proposition and amplify this in their external marketing efforts.


Is DOGE a cybersecurity threat? A security expert explains the dangers of violating protocols and regulations

Traditionally, the purpose of cybersecurity is to ensure the confidentiality and integrity of information and information systems while helping keep those systems available to those who need them. But in DOGE's first few weeks of existence, reports indicate that its staff appears to be ignoring those principles and potentially making the federal government more vulnerable to cyber incidents. ... Currently, the general public, federal agencies and Congress have little idea who is tinkering with the government's critical systems. DOGE's hiring process, including how it screens applicants for technical, operational or cybersecurity competency, as well as experience in government, is opaque. And journalists investigating the backgrounds of DOGE employees have been intimidated by the acting U.S. attorney in Washington. DOGE has hired young people fresh out of—or still in—college or with little or no experience in government, but who reportedly have strong technical prowess. But some have questionable backgrounds for such sensitive work. And one leading DOGE staffer working at the Treasury Department has since resigned over a series of racist social media posts. ... DOGE operatives are quickly developing and deploying major software changes to very complex old systems and databases, according to reports. 


Australian businesses urged to help shape new data security framework

With the consultation process entering its final stages, businesses are encouraged to take part in upcoming workshops or submit feedback online. Workshops will take place in Sydney on Tuesday 18 February, Brisbane on Wednesday 19 February, and Melbourne on Wednesday 26 February. For those unable to attend, an online survey is available for businesses to provide their insights. Key emphasised the significance of business participation in shaping the framework. "This is the last chance to get involved in the industry consultation," he said. "Workshops are taking place this month, but if people can't attend, we'd love them to complete the survey online." The workshops will be interactive, allowing participants to share their experiences with data security, discuss their existing frameworks, and provide recommendations. ... Without meaningful industry engagement, the framework risks being ineffective or underutilised. Key warned that failing to gather input from businesses could lead to a framework that does not meet their needs. "We essentially would be creating an industry framework that industry may or may not actually utilise," he said. "This is really designed for industry, and we need that kind of input from industry for it to work for them."


Can AI Early Warning Systems Reboot the Threat Intel Industry?

AI platforms learn how multiple campaigns connect, which malicious tools get repeated, and how often threat actors pivot to new malicious infrastructure and domains. That kind of cross-campaign insight is gold for defenders, especially when the data is available in real time. Of course, adversaries won’t line up to feed their best secrets to OpenAI, Microsoft or Google AI platforms. Some hacker groups prefer open-source models, hosting them on private servers where there’s zero chance of being monitored. As these open-source models gain sophistication, criminals can test or refine their attacks without Big Tech breathing down their necks, but the lure of advanced online models with powerful capabilities will be hard to resist. Even as security experts remain bullish on the power of AI to save threat intel, there are adversarial concerns at play. Some warn that attackers can poison AI systems, manipulate data to produce false negatives, or exploit generative models for their own malicious scripts. But as it stands, the big AI platforms already see more malicious signals in a day than any single cybersecurity vendor sees in a year. That scale is exactly what’s been missing from threat intelligence. For all the talk about “community sharing” and open exchanges, it’s always been a tangled mess.


Security validation: The new standard for cyber resilience

Stolen credentials are a goldmine for attackers. According to Verizon’s 2024 Data Breach Investigations Report (DBIR), compromised credentials account for 31% of breaches over the past decade and 77% of web application attacks. The Colonial Pipeline attack in 2021 is a stark reminder of the damage that can result from leaked credentials—attackers gained access to the company’s VPN using credentials found on the dark web. Security validation makes it easy to test for credential-related risks. ... One of the most significant benefits of security validation is its ability to provide evidence-based guidance for remediation. Rather than adopting a “patch everything” approach, teams can focus on the most critical fixes based on real exploitability risk and system impact. ... Traditional security metrics, such as the number of vulnerabilities patched or the percentage of endpoints with antivirus software, only tell part of the story. Security validation offers a fresh perspective by measuring your posture based on emulated attacks. This shift from reactive to proactive security management is essential in today’s ever-changing threat landscape. By safely emulating real-world attacks in live environments, security validation ensures that your controls can detect, block, and respond to threats before damage occurs.
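As an illustration of the kind of check a validation exercise can automate, here is a minimal sketch that flags accounts whose passwords appear in known breach data. The breach corpus, account names, and passwords are all hypothetical; a real deployment would draw hashes from a breach-data feed rather than an inline set.

```python
import hashlib

# Hypothetical local corpus of SHA-1 hashes from a breach-data feed.
BREACHED_SHA1 = {
    hashlib.sha1(pw.encode()).hexdigest()
    for pw in ("password", "letmein", "qwerty123")
}

def exposed_credentials(accounts):
    """Return usernames whose password hash appears in the breach corpus."""
    return [user for user, pw in accounts.items()
            if hashlib.sha1(pw.encode()).hexdigest() in BREACHED_SHA1]

accounts = {"alice": "correct horse battery", "bob": "letmein"}
print(exposed_credentials(accounts))  # ['bob']
```

Running such a check continuously, rather than once a year, is the difference between validation and a point-in-time audit.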


Cyber insurance is no silver bullet for cybersecurity

Cyber insurance is designed to minimise organisations’ financial losses from cyber incidents by covering costs like breach notification, data restoration, legal fees, and even ransomware payments. Insurers evaluate an organisation’s security posture by assessing the implementation of specific security controls. ... Despite its potential, research reveals that cyber insurance falls short in improving security practices. A report by the Royal United Services Institute (RUSI) think tank points out that cyber insurance policies often lack standardisation and fail to incentivise organisations to adopt security practices aligned with frameworks like ISO 27001 or NIST CSF. Another study emphasises that insurance requirements may be motivated by various other factors (e.g., controls that reduce very specific risks, length of policy period, liable risks) rather than improving overall organisational security in a meaningful way. Not only does this gap weaken the argument for cyber insurance improving security, it also poses a risk for businesses. Organisations meeting insurance requirements (which may be minimal in terms of security) may mistakenly believe they are well-protected, only to find themselves vulnerable to attacks that exploit overlooked weaknesses.


The Metamorphosis of Open Source: An Industry in Transition

The rise of artificial intelligence has introduced a new topic to the open source conversation. Unlike traditional software, AI systems include both code and models, data, and training methods, creating complexities that existing open source licenses were not designed to address. Recognizing this gap, the OSI launched the Open Source AI Definition (OSAID) in 2024, marking a pivotal moment in the evolution of open source principles. OSAID v1.0 defines the essential freedoms for AI systems: the rights to use, study, modify, and share AI technologies without restriction. This framework aims to ensure that AI systems labeled as “open source” align with the core values of transparency and collaboration underpinning the movement. However, the journey has not been without challenges. The OSI’s definition has sparked debates, particularly around the legal ambiguities of model weights and data licensing. For instance, while OSAID emphasizes transparency in data sources and methodologies, it does not resolve whether model weights derived from unlicensed data can be freely shared or used commercially. This has left businesses and developers navigating a gray area, where the practical adoption of open source AI models requires careful legal scrutiny.

Daily Tech Digest - February 09, 2025


Quote for the day:

“Be patient with yourself. Self-growth is tender; it’s holy ground. There’s no greater investment.” -- Stephen Covey


Quantum Artificial Intelligence

Classical AI faces limitations related to computational efficiency, data processing capabilities, and pattern recognition in highly complex systems. Quantum computing, leveraging superposition and entanglement, offers promising solutions to overcome these challenges. ... Deep learning models form the backbone of modern AI, but training them requires enormous computing power and time. Quantum Deep Learning (QDL) introduces quantum-based algorithms, such as Grover’s Algorithm and Shor’s Algorithm, which can significantly accelerate deep learning processes, allowing for more sophisticated and efficient AI models. ... Traditional AI systems rely on sequential or limited parallel processing. However, quantum computers can process multiple possibilities simultaneously due to quantum superposition, enabling AI models to analyze vast amounts of data exponentially faster than classical systems. ... Physicist Roger Penrose and neuroscientist Stuart Hameroff proposed the “Orch-OR” (Orchestrated Objective Reduction) theory, suggesting that human consciousness arises from quantum processes within microtubules in brain neurons. If true, this raises the possibility that an AI system powered by quantum computing could simulate or even replicate aspects of human consciousness.
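The amplitude-amplification idea behind Grover’s Algorithm can be simulated classically. The sketch below (pure Python, illustrative only, not a quantum implementation) shows that after roughly √N oracle calls, the target of an 8-item search is measured with high probability, versus an expected N/2 probes classically:

```python
from math import sqrt, pi, floor

def grover_probability(n_states, target):
    """Classically simulate Grover's search; return the probability of
    measuring the target after the optimal number of iterations."""
    amps = [1 / sqrt(n_states)] * n_states        # uniform superposition
    for _ in range(floor(pi / 4 * sqrt(n_states))):
        amps[target] = -amps[target]              # oracle: flip target's sign
        mean = sum(amps) / n_states
        amps = [2 * mean - a for a in amps]       # diffusion: invert about mean
    return amps[target] ** 2

print(round(grover_probability(8, 3), 3))  # 0.945 after only 2 iterations
```

The quadratic speedup shown here is modest; the exponential claims in the excerpt apply to specific problems such as Shor’s factoring, not to search in general.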


Life After VMware: Which Alternative Is Right For You?

Despite an unhappy VMware customer base, Broadcom is thriving. In its most recent earnings, the company posted record revenues of $51.6 billion, with $2.7 billion coming from software sales. Broadcom is betting that, despite rising costs, enterprises will still choose VMware over competing solutions. However, that gamble is far from certain, with mounting competition from alternative hypervisors, open-source platforms, and public-cloud-specific solutions. ... However, moving away from VMware is no simple task. Enterprises must weigh migration complexity, integration challenges, and the long-term viability of their chosen alternative. The decision isn’t just about cost savings — it’s about aligning IT strategy with the future of hybrid cloud, containerization, and AI-driven workloads. ... This shift is already creating winners. Nutanix, Microsoft Hyper-V, Azure Stack HCI, and Red Hat OpenShift Virtualization are emerging as viable competitors. Each of these offers distinct advantages based on business needs and strategic direction, with Nutanix leading the pack. The time to act is now. Enterprises that proactively navigate this transition will mitigate the uncertainties of VMware's new ownership and position themselves for long-term success.


AI Agents Are Now Trading IP Rights With Each Other—And Earning Crypto for Their Owners

Since Story Protocol functions as an IP market, everything revolves around that idea, and the mechanics are straightforward. AI agents register their work on Story's blockchain, and then other agents purchase those assets using crypto. The system handles licensing, rights management, and revenue distribution automatically through smart contracts. Humans can use the system instead of agents, but that’s not nearly as cool. In fact, some agents are already negotiating IP with other agents—not just humans. “There's a lot of agentic commerce happening on Story because Story is a permissionless, programmable IP system," Lee said. ... Lee described a system where AI-generated content based on Goyer's universe would automatically split revenue between the AI creator and the original IP holder. This model ensures creators are compensated when AI builds on their work. He emphasized that the universe is entirely original, with all characters, ships, and storylines registered on Story. Users can expand on those elements, create side stories, contribute to the canon, and share in the financial benefits. This approach, he said, represents a new way for AI to collaborate with creators, extending and monetizing their work while distributing the rewards. ... Story’s value proposition has also been interesting enough to attract other significant AI projects.


Finally, I Found The Best AI IDE!

Let's be honest. Traditional coding can be... tedious. We spend countless hours wrestling with syntax, debugging obscure errors, and searching Stack Overflow for that one line of code that'll fix everything. ... But the reality, until now, has often fallen short. Many "AI" tools felt like glorified autocomplete, offering suggestions that were more distracting than helpful. Others were locked behind hefty paywalls, making them inaccessible to many developers. ... After extensive testing, my personal winning combination is Aide + Theia. Aide for day-to-day coding. The AI pair-programming features are simply unmatched for productivity. And the fact that it's fully open-source and free is the icing on the cake. Theia IDE for larger projects, collaborative work, or when I need the flexibility of a cloud-based environment. Its compatibility with VS Code extensions and LSP makes it a future-proof choice. Why not Windsurf or Cursor? While Windsurf offers a compelling free tier, its closed-source nature is a dealbreaker. Cursor is fantastic, but the price tag puts it out of reach for many developers. ... The world of AI-powered IDEs is evolving at lightning speed. But for me, the combination of Aide and Theia represents the sweet spot: powerful, flexible, and accessible to everyone.


Rewiring maintenance with gen AI

As the problems pile up, forward-thinking maintenance functions are searching for new ways to address cost, productivity, and skills challenges. Gen AI is emerging as a transformative solution for these challenges. Gen AI tools use advanced machine learning models to accelerate data analysis, predict potential failures, automate routine tasks, and retain critical knowledge.  ... Armed with the gen AI tool, frontline maintenance teams are now evolving their maintenance strategies, adopting best practices from across the organization. The system continuously updates its library of recommended strategies based on the effectiveness of maintenance interventions elsewhere, helping the organization collaboratively improve overall maintenance performance. Since implementing the gen AI FMEA tool, the company has seen a significant reduction in equipment downtime. Employee capacity has also increased because less time is spent manually creating FMEAs and related work orders. ... Realizing the full potential of gen AI in maintenance is challenging for several reasons. These technologies are novel, requiring maintenance organizations to understand new technologies and avoid unfamiliar pitfalls. And gen AI is advancing extremely rapidly, requiring an agile approach to use-case selection, tool development, and continuous evolution.


Chain-of-Associated-Thoughts (CoAT): An AI Framework to Enhance LLM Reasoning

Unlike static RAG approaches that retrieve information upfront, CoAT activates knowledge retrieval in response to specific reasoning steps—equivalent to a mathematician recalling relevant theorems only when needed in a proof. Second, an optimized MCTS algorithm incorporates this associative process through a novel four-stage cycle: selection, expansion with knowledge association, quality evaluation, and value backpropagation. This creates a feedback loop where each reasoning step can trigger targeted knowledge updates, as shown in Figure 4 of the original implementation. ... For retrieval-augmented generation (RAG) tasks, CoAT was compared against NativeRAG, IRCoT, HippoRAG, LATS, and KAG on the HotpotQA and 2WikiMultiHopQA datasets. Metrics such as Exact Match (EM) and F1 scores confirmed CoAT’s superior performance, demonstrating its ability to generate precise and contextually relevant answers. In code generation, CoAT-enhanced models outperformed fine-tuned counterparts (Qwen2.5-Coder-7B-Instruct, Qwen2.5-Coder-14B-Instruct) on datasets like HumanEval, MBPP, and HumanEval-X, underscoring its adaptability to domain-specific reasoning tasks. This work establishes a new paradigm for LLM reasoning by integrating dynamic knowledge association with structured search. 
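The four-stage cycle described above can be sketched as a conventional MCTS loop with a retrieval hook in the expansion step. Everything below is a hypothetical simplification, not the paper's implementation: `associate` stands in for the knowledge-base query, and `propose`/`evaluate` are caller-supplied stand-ins for candidate-step generation and answer scoring.

```python
import math
import random

class Node:
    def __init__(self, step, parent=None):
        self.step, self.parent = step, parent
        self.children, self.visits, self.value = [], 0, 0.0

def associate(step):
    # Stub for on-demand knowledge retrieval triggered by a reasoning step;
    # a real system would query a knowledge base or RAG index here.
    return f"facts-for:{step}"

def ucb(node, c=1.4):
    if node.visits == 0:
        return float("inf")
    exploit = node.value / node.visits
    explore = c * math.sqrt(math.log(node.parent.visits) / node.visits)
    return exploit + explore

def coat_search(root_step, propose, evaluate, iterations=50):
    root = Node(root_step)
    for _ in range(iterations):
        node = root
        while node.children:                     # 1. selection
            node = max(node.children, key=ucb)
        knowledge = associate(node.step)         # 2. expansion with association
        for step in propose(node.step, knowledge):
            node.children.append(Node(step, parent=node))
        leaf = random.choice(node.children) if node.children else node
        reward = evaluate(leaf.step)             # 3. quality evaluation
        while leaf is not None:                  # 4. value backpropagation
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    return max(root.children, key=lambda n: n.visits).step
```

The key structural point is that `associate` runs inside the loop, per reasoning step, rather than once up front as in static RAG.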


Begin with problems, sandbox, identify trustworthy vendors — a quick guide to getting started with AI

The most valuable testing uses a framework connecting to crucial key performance indicators (KPIs). According to Google Cloud: “KPIs are essential in gen AI deployments for a number of reasons: Objectively assessing performance, aligning with business goals, enabling data-driven adjustments, enhancing adaptability, facilitating clear stakeholder communication and demonstrating the AI project’s ROI. They are critical for measuring success and guiding improvements in AI initiatives.” In other words, your testing framework could be based on accuracy, coverage, risk or whichever KPI is most important to you. You just need to have clear KPIs. Once you do, gather five to 15 people to perform the testing. Two teams of seven people are ideal for this. As those experienced individuals begin testing those tools, you will be able to gather enough input to determine whether this system is worth scaling. Leaders often ask what they should do if a vendor isn’t willing to do a pilot program with them. This is a valid question, but the answer is simple. If you find yourself in this situation, do not engage further with the company. Any worthy vendor will consider it an honor to create a pilot program for you. ... 
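A KPI-driven pilot can be as simple as a harness that scores a candidate tool against labeled test cases and gates on a threshold. The toy model, cases, and 0.8 bar below are illustrative assumptions, not Google Cloud's framework:

```python
def accuracy_kpi(model, test_cases, threshold=0.8):
    """Score a candidate tool on labeled cases and report whether it
    clears the pilot's accuracy KPI."""
    correct = sum(1 for prompt, expected in test_cases
                  if model(prompt) == expected)
    score = correct / len(test_cases)
    return score, score >= threshold

# Hypothetical stand-in for a vendor tool under evaluation.
toy_model = lambda prompt: prompt.upper()
cases = [("ok", "OK"), ("go", "GO"), ("no", "no")]
score, passed = accuracy_kpi(toy_model, cases)
print(score, passed)  # 2 of 3 correct, below the 0.8 bar
```

Swap accuracy for coverage or risk by changing the scoring line; the point is that the pilot's pass/fail criterion is explicit before the two testing teams start.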


Meta has an AI for brain typing, but it’s stuck in the lab

Facebook’s original quest for a consumer brain-reading cap or headband ran into technical obstacles, and after four years, the company scrapped the idea. But Meta never stopped supporting basic research on neuroscience, something it now sees as an important pathway to more powerful AIs that learn and reason like humans. King says his group, based in Paris, is specifically tasked with figuring out “the principles of intelligence” from the human brain. “Trying to understand the precise architecture or principles of the human brain could be a way to inform the development of machine intelligence,” says King. “That’s the path.” The typing system is definitely not a commercial product, nor is it on the way to becoming one. The magnetoencephalography scanner used in the new research collects magnetic signals produced in the cortex as brain neurons fire. But it is large and expensive and needs to be operated in a shielded room, since Earth’s magnetic field is a trillion times stronger than the one in your brain. Norman likens the device to “an MRI machine tipped on its side and suspended above the user’s head.” What’s more, says King, the second a subject’s head moves, the signal is lost. “Our effort is not at all toward products,” he says.


Enterprise Architecture: How AI and Distributed Systems are Transforming Business

Predictive scaling represents the next frontier in enterprise architecture. By analyzing patterns across historical usage, seasonal variations and user behavior, modern systems can anticipate resource needs before demand spikes occur. This proactive approach marks a significant departure from traditional reactive scaling methods, dramatically improving both performance and cost efficiency. The implementation of AI in enterprise systems demands careful consideration of broader organizational goals. Technical teams must build robust data pipelines while maintaining clear communication channels across departments. System architecture should accommodate current needs while remaining adaptable enough to incorporate emerging technologies and methodologies. At Cisco, we implemented predictive scaling in IoT networks managing millions of connected devices. Machine learning algorithms analyzed patterns in device usage and system load, dynamically adjusting server capacity to ensure seamless operations.
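The core idea of forecasting load and provisioning ahead of demand can be reduced to a toy: a moving-average forecast with a trend term and headroom. Real predictive scalers use ML models over seasonal data; the window, headroom factor, and numbers below are illustrative assumptions only.

```python
import math

def predicted_capacity(history, window=3, headroom=1.25):
    """Forecast next-interval load from recent history and provision
    capacity (with headroom) before the spike arrives."""
    recent = history[-window:]
    trend = max(recent[-1] - recent[0], 0)       # only chase rising load
    forecast = sum(recent) / len(recent) + trend
    return math.ceil(forecast * headroom)

load = [100, 110, 130, 170]      # requests/min, climbing
print(predicted_capacity(load))  # provisions well above the last observation
```

Because the trend term reacts to the slope rather than the last sample, capacity is raised before the next spike lands, which is the proactive behavior the excerpt contrasts with reactive scaling.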


Building a Culture of Cyber Resiliency with AI

It makes sense that the top concern for cybersecurity leaders is vulnerabilities associated with unpatched software and systems in their current tech stack (54%). Close behind are concerns around vulnerabilities brought on by misconfiguration (48%), and end-of-life systems (43%). Despite recognizing the need to address these exposures, nearly half of organizations surveyed scan for vulnerabilities only once a week, or less frequently, signaling a lack of adequate resources to identify and address potential threats in a timely manner. The Verizon DBIR suggests that organizations took almost two months to patch and remediate 50% of critical vulnerabilities, while these same vulnerabilities became mass-exploitable in five days. This makes it a perilous situation for enterprises. To top it all, threat actors and their methods, powered by AI, are becoming increasingly difficult to detect and prevent. Recent data found that 95% of IT leaders believe that cyber-attacks are more sophisticated than ever before, with AI-powered attacks being the most serious emerging threat. Over 80% of those respondents agreed that scams like phishing have become more difficult to detect with the rise in actors using AI maliciously. 

Daily Tech Digest - February 08, 2025


Quote for the day:

“There is no failure except in no longer trying.” -- Chris Bradford


Google's DMARC Push Pays Off, but Email Security Challenges Remain

Large email senders are not the only groups quickening the pace of DMARC adoption. The latest Payment Card Industry Data Security Standard (PCI DSS) version 4.0 requires DMARC for all organizations that handle credit card information, while the European Union's Digital Operational Resilience Act (DORA) makes DMARC a necessity for its ability to report on and block email impersonation, Red Sift's Costigan says. "Mandatory regulations and legislation often serve as the tipping point for most organizations," he says. "Failures to do reasonable, proactive cybersecurity — of which email security and DMARC is obviously a part — are likely to meet with costly regulatory actions and the prospect of class action lawsuits." Overall, the authentication specification is working as intended, which explains its arguably rapid adoption, says Roger Grimes, a data-driven-defense evangelist at security awareness and training firm KnowBe4. Other cybersecurity standards, such as DNSSEC and IPSEC, have been around longer, but DMARC adoption has outpaced them, he maintains. "DMARC stands alone as the singular success as the most widely implemented cybersecurity standard introduced in the last decade," Grimes says.
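For readers unfamiliar with what a DMARC deployment actually looks like: the policy is published as a DNS TXT record at `_dmarc.<domain>`. A minimal parser sketch (illustrative only, not a full RFC 7489 validator; the example record and address are hypothetical) shows the tag/value structure:

```python
def parse_dmarc(txt_record):
    """Parse a DMARC TXT record (published at _dmarc.<domain>) into tags."""
    tags = dict(part.strip().split("=", 1)
                for part in txt_record.split(";") if "=" in part)
    if tags.get("v") != "DMARC1":
        raise ValueError("not a DMARC record")
    return tags

# A typical enforcing policy: reject failures, send aggregate reports.
record = "v=DMARC1; p=reject; rua=mailto:dmarc@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # reject
```

The `p` tag is what the PCI DSS and DORA mandates effectively target: a `p=none` record satisfies the letter of "having DMARC" while blocking nothing.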


Can Your Security Measures Be Turned Against You?

Over-reliance on certain security products might also allow attackers to extend their reach across various organizations. For example, the recent failure of CrowdStrike’s endpoint detection and response (EDR) tool, which caused widespread global outages, highlights the risks associated with depending too heavily on a single security solution. Although this incident wasn’t the result of a cyber attack, it clearly demonstrates the potential issues that can arise from such reliance. For years, the cybersecurity community has been aware of the risks posed by vulnerabilities in security products. A notable example from 2015 involved a critical flaw in FireEye’s email protection system, which allowed attackers to execute arbitrary commands and potentially take full control of the device. More recently, a vulnerability in Proofpoint’s email security service was exploited in a phishing campaign that impersonated major corporations like IBM and Disney. Windows SmartScreen is designed to shield users from malicious software, phishing attacks, and other online threats. Initially launched with Internet Explorer, SmartScreen has been a core part of Windows since version 8. 


Why Zero Trust Will See Alert Volumes Rocket

As the complexity of zero trust environments grows, so does the need for tools to handle the data explosion. Hypergraphs and generative AI are emerging as game-changers, enabling SOC teams to connect disparate events and uncover hidden patterns. Telemetry collected in zero trust environments is a treasure trove for analytics. Every interaction, whether permitted or denied, is logged, providing the raw material for identifying anomalies. The cybersecurity industry has set standards for exchanging and documenting threat intelligence. By leveraging structured frameworks like MITRE ATT&CK, MITRE D3FEND, and OCSF, activities can be enriched with contextual information enabling better detection and decision-making. Hypergraphs go beyond traditional graphs by representing relationships between multiple events or entities. They can correlate disparate events. For example, a scheduled task combined with denied AnyDesk traffic and browsing to MegaUpload might initially seem unrelated. However, hypergraphs can connect these dots, revealing the signature of a ransomware attack like Akira. By analysing historical patterns, hypergraphs can also predict attack patterns, allowing SOC teams to anticipate the next steps of an attacker and defend proactively.
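The correlation idea can be illustrated with a toy that groups events sharing any entity (host, user, IP); each connected group approximates one hyperedge. The event names below echo the Akira example but are hypothetical, and a real hypergraph engine does far more than this transitive grouping:

```python
from collections import defaultdict

def correlate(events):
    """Group events that share any entity; each connected group
    approximates one hyperedge in the incident hypergraph."""
    by_entity = defaultdict(list)
    for i, ev in enumerate(events):
        for entity in ev["entities"]:
            by_entity[entity].append(i)
    seen, groups = set(), []
    for i in range(len(events)):
        if i in seen:
            continue
        group, stack = [], [i]
        while stack:                  # walk events linked by shared entities
            j = stack.pop()
            if j in seen:
                continue
            seen.add(j)
            group.append(events[j]["type"])
            for entity in events[j]["entities"]:
                stack.extend(by_entity[entity])
        groups.append(sorted(group))
    return groups

events = [
    {"type": "scheduled_task",   "entities": {"host-7"}},
    {"type": "anydesk_denied",   "entities": {"host-7", "10.0.0.5"}},
    {"type": "megaupload_visit", "entities": {"10.0.0.5"}},
    {"type": "dns_lookup",       "entities": {"host-9"}},
]
print(correlate(events))  # the first three events link into one incident
```

Seen in isolation, each of the first three events is benign; grouped through their shared host and IP, they form the pattern the excerpt describes.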


Capable Protection: Enhancing Cloud-Native Security

Much like in a game of chess, anticipating your opponent’s moves and strategizing accordingly is key to security. Understanding the value and potential risks associated with NHIs and Secrets is the first step towards securing your digital environment. Remediation prioritization plays a crucial role in managing NHIs. The identification and classification process of NHIs enables businesses to react promptly and adequately to any potential vulnerabilities. Furthermore, awareness and education are fundamental to minimize human-induced breaches. ... Cybersecurity must adapt. The traditional, human-centric approach to cybersecurity is inadequate. Integrating an NHI management strategy into your cybersecurity plan is therefore a strategic move. Not only does it enhance an organization’s security posture, but it also facilitates regulatory compliance. Coupled with the potential for substantial cost savings, it’s clear that NHI management is an investment with significant returns. For many organizations, the challenge today lies in striking a balance between speed and security. Rapid deployment of applications and digital services is essential for maintaining competitive advantage, yet this can often be at odds with the need for adequate cybersecurity. 
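Remediation prioritization over non-human identities can be as simple as sorting by a risk score. The fields, weights, and example identities below are illustrative assumptions, not any vendor's actual model:

```python
def prioritize(nhis):
    """Rank non-human identities for remediation: exposed secrets first,
    then high privilege, then stale credentials."""
    def risk(nhi):
        return (4 * nhi["secret_exposed"]
                + 2 * nhi["privileged"]
                + min(nhi["secret_age_days"] / 90, 2))
    return sorted(nhis, key=risk, reverse=True)

nhis = [
    {"name": "ci-bot",   "secret_exposed": True,  "privileged": False, "secret_age_days": 30},
    {"name": "db-admin", "secret_exposed": False, "privileged": True,  "secret_age_days": 400},
    {"name": "reporter", "secret_exposed": False, "privileged": False, "secret_age_days": 10},
]
print([n["name"] for n in prioritize(nhis)])  # ci-bot first
```

Even a crude score like this beats alphabetical ticket queues: the identity with a leaked secret outranks the merely stale one.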


Attackers Exploit Cryptographic Keys for Malware Deployment

Microsoft recommends developers avoid using machine keys copied from public sources and rotate keys regularly to mitigate risks. The company also removed key samples from its documentation and provided a script for security teams to identify and replace publicly disclosed keys in their environments. Microsoft Defender for Endpoint also includes an alert for publicly exposed ASP.NET machine keys, though the alert itself does not indicate an active attack. Organizations running ASP.NET applications, especially those deployed in web farms, are urged to replace fixed machine keys with auto-generated values stored in the system registry. If a web-facing server has been compromised, rotating the machine keys alone may not eliminate persistent threats. Microsoft recommends conducting a full forensic investigation to detect potential backdoors or unauthorized access points. In high-risk cases, security teams should consider reformatting and reinstalling affected systems to prevent further exploitation, the report said. Organizations should also implement best practices such as encrypting sensitive configuration files, following secure DevOps procedures and upgrading applications to ASP.NET 4.8.
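The root cause here is keys copied from public samples; generating unique, random key material instead is trivial. A sketch (the key lengths are common choices for a `<machineKey>` element, not Microsoft's prescription, and this is not their remediation script):

```python
import secrets

def new_machine_key():
    """Generate random hex key material for an ASP.NET <machineKey>
    element: a 64-byte validation key and a 32-byte decryption key."""
    return {
        "validationKey": secrets.token_hex(64).upper(),  # 128 hex chars
        "decryptionKey": secrets.token_hex(32).upper(),  # 64 hex chars
    }

keys = new_machine_key()
print(len(keys["validationKey"]), len(keys["decryptionKey"]))  # 128 64
```

Using a CSPRNG (`secrets`, not `random`) matters: the entire attack class depends on the attacker knowing or guessing the key.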


The race to AI in 2025: How businesses can harness connectivity to pick up pace

When it comes to optimizing cloud workloads and migrating to available data centers, connectivity is the “make or break” technology. This is why Internet Exchanges (IXs) – physical platforms where multiple networks interconnect to exchange traffic directly with one another via peering – have become indispensable. An IX allows businesses to bypass the public Internet and find the shortest and fastest network pathways for their data, dramatically improving performance and reducing latency for all participants. Importantly, smart use of an IX facility will enable businesses to connect seamlessly to data centers outside of their “home” region, removing geography as a barrier and easing the burden on data center hubs. This form of connectivity is becoming increasingly popular, with the number of IXs in the US surging by more than 350 percent in the past decade. The use of IXs itself is nothing new, but what is relatively new is the neutral model they now employ. A neutral IX isn’t tied to a specific carrier or data center, which means businesses have more connectivity options open to them, increasing redundancy and enhancing resilience. Our own research in 2024 revealed that more than 80 percent of IXs in the US are now data center and carrier-neutral, making it the dominant interconnection model.


The hidden threat of neglected cloud infrastructure

Left unattended for over a decade, malicious actors could have reregistered this bucket to deliver malware or launch devastating supply chain attacks. Fortunately, researchers notified CISA, which promptly secured the vulnerable resource. The incident illustrates how even organizations dedicated to cybersecurity can fall prey to the dangers of neglected digital infrastructure. This story is not an anomaly. It indicates a systemic issue that spans industries, governments, and corporations. ... Entities attempting to communicate with these abandoned assets include government organizations (such as NASA and state agencies in the United States), military networks, Fortune 100 companies, major banks, and universities. The fact that these large organizations were still relying on mismanaged or forgotten resources is a testament to the pervasive nature of this oversight. The researchers emphasized that this issue isn’t specific to AWS, the organizations responsible for these resources, or even a single industry. It reflects a broader systemic failure to manage digital assets effectively in the cloud computing age. The researchers noted the ease of acquiring internet infrastructure—an S3 bucket, a domain name, or an IP address—and a corresponding failure to institute strong governance and life-cycle management for these resources.
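Life-cycle management starts with knowing what your own artifacts reference. A sketch that inventories S3 bucket names mentioned in configuration files, so each can be checked against buckets the organization still controls (the regex covers common URL styles and the config text is hypothetical):

```python
import re

# Match common S3 reference styles: virtual-hosted, path-style, and s3:// URIs.
S3_PATTERNS = re.compile(
    r"(?:https?://([a-z0-9.-]+)\.s3\.amazonaws\.com"
    r"|https?://s3\.amazonaws\.com/([a-z0-9.-]+)"
    r"|s3://([a-z0-9.-]+))"
)

def referenced_buckets(text):
    """Extract every S3 bucket name referenced in a config or script."""
    return {g for match in S3_PATTERNS.finditer(text)
            for g in match.groups() if g}

config = """
installer_url = https://legacy-releases.s3.amazonaws.com/v1/setup.sh
backup = s3://old-backups/nightly
"""
print(sorted(referenced_buckets(config)))  # ['legacy-releases', 'old-backups']
```

Any bucket in the inventory that no one currently owns is exactly the reregistration risk the researchers describe.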


DevOps Evolution: From Movement to Platform Engineering in the AI Era

After nearly 20 years of DevOps, Grabner sees an opportunity to address historical confusion while preserving core principles. “We want to solve the same problem – reduce friction while improving developer and operational efficiency. We want to automate, monitor, and share.” Platform engineering represents this evolution, enabling organizations to scale DevOps best practices through self-service capabilities. “Platform engineering allows us to scale DevOps best practices in an enterprise organization,” Grabner explains. “What platform engineering does is provide self-services to engineers so they can do everything we wanted DevOps to do for us.” At Dynatrace Perform 2025, the company announced several innovations supporting this evolution. The enhanced Davis AI engine now enables preventive operations, moving beyond reactive monitoring to predict and prevent incidents before they occur. This includes AI-powered generation of artifacts for automated remediation workflows and natural language explanations with contextual recommendations. The evolution is particularly evident in how observability is implemented. “Traditionally, observability was always an afterthought,” Grabner explains. 


Bridging the IT Gap: Preparing for a Networking Workforce Evolution

People coming out of university today are far more likely to be experienced in Amazon Web Services (AWS) and Azure than in Border Gateway Protocol (BGP) and Ethernet virtual private network (EVPN). They have spent more time with Kubernetes than with a router or switch command line. Sure, when pressed into action and supported by senior staff or technical documentation, they can perform. But the industry is notorious for its bespoke solutions, snowflake workflows, and poor documentation. None of this ought to be a surprise. At least part of the allure of the cloud for many is that it carries the illusion of pushing problems to another team. Of course, this is hardly true. No company should abdicate architectural and operational responsibility entirely. But in our industry’s rush to new solutions, there are countless teams for which this was an unspoken objective. Regardless, what happens to companies when the people skilled enough to manage the complexity are no longer on call? Perhaps you’re a pessimist and feel that the next generation of IT pros is somehow less capable than in the past. The NASA engineers who landed a man on the moon may have similar things to say about today’s rocket scientists who rely heavily on tools to do the math for them.


A View on Understanding Non-Human Identities Governance

NHIs inherently require connections to other systems and services to fulfill their purpose. This interconnectivity means every NHI becomes a node in a web of interdependencies. From an NHI governance perspective, this necessitates maintaining an accurate and dynamic inventory of these connections to manage the associated risks. For example, if a single NHI is compromised, what does it connect to, and what could an attacker access or move into laterally? Proper NHI governance must include tools to map and monitor these relationships. While there are many ways to go about this manually, what we actually want is an automated way to tell what is connected to what, what is used for what, and by whom. When thinking in terms of securing our systems, we can leverage another important fact about all NHIs in a secured application to build that map: they all, necessarily, have secrets. ... Essentially, two risks make understanding the scope of a secret critical for enterprise security. First, misconfigured or over-privileged secrets can inadvertently grant access to sensitive data or critical systems, significantly increasing the attack surface. Imagine accidentally granting write privileges to a system that can access your customers’ PII. That is a ticking time bomb, waiting for a threat actor to find and exploit it.
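The connection map described above can be modeled as a small graph keyed by secrets. This is a minimal sketch under stated assumptions: the inventory data model, the NHI names, and the secret names are all invented for illustration. Each NHI record lists the secrets it holds and the systems those secrets unlock; a breadth-first walk then answers the "if this NHI is compromised, what is reachable?" question, including lateral movement through secrets shared by multiple NHIs.

```python
from collections import deque

# Illustrative inventory: NHI -> {secret it holds: [systems that secret unlocks]}.
INVENTORY = {
    "ci-runner":   {"gh-deploy-key": ["source-repo"],
                    "registry-token": ["container-registry"]},
    "billing-svc": {"db-cred": ["customers-db"]},
    "etl-job":     {"db-cred": ["customers-db"],
                    "bucket-key": ["reports-bucket"]},
}

def blast_radius(nhi: str) -> set[str]:
    """Systems reachable if `nhi` is compromised, including pivots through
    secrets shared with other NHIs."""
    reachable, seen_nhis = set(), set()
    queue = deque([nhi])
    while queue:
        current = queue.popleft()
        if current in seen_nhis:
            continue
        seen_nhis.add(current)
        for secret, systems in INVENTORY.get(current, {}).items():
            reachable.update(systems)
            # A shared secret lets an attacker pivot to any other NHI holding it.
            for other, secrets in INVENTORY.items():
                if secret in secrets and other not in seen_nhis:
                    queue.append(other)
    return reachable
```

In this toy inventory, compromising `billing-svc` reaches not just `customers-db` but also `reports-bucket`, because `db-cred` is shared with `etl-job`, which holds a second secret. That is exactly the shared-secret sprawl that makes automated mapping, rather than manual review, the practical answer.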