
Daily Tech Digest - January 30, 2026


Quote for the day:

"In my experience, there is only one motivation, and that is desire. No reasons or principle contain it or stand against it." -- Jane Smiley



Crooks are hijacking and reselling AI infrastructure: Report

In a report released Wednesday, researchers at Pillar Security say they have discovered campaigns at scale going after exposed large language model (LLM) and MCP endpoints – for example, an AI-powered support chatbot on a website. “I think it’s alarming,” said report co-author Ariel Fogel. “What we’ve discovered is an actual criminal network where people are trying to steal your credentials, steal your ability to use LLMs and your computations, and then resell it.” ... How big are these campaigns? In the past couple of weeks alone, the researchers’ honeypots captured 35,000 attack sessions hunting for exposed AI infrastructure. “This isn’t a one-off attack,” Fogel added. “It’s a business.” He doubts a nation-state is behind it; the campaigns appear to be run by a small group. ... Defenders need to treat AI services with the same rigor as APIs or databases, he said, starting with authentication, telemetry, and threat modelling early in the development cycle. “As MCP becomes foundational to modern AI integrations, securing those protocol interfaces, not just model access, must be a priority,” he said. ... Despite the number of news stories in the past year about AI vulnerabilities, Meghu said the answer is not to give up on AI, but to keep strict controls on its usage. “Do not just ban it, bring it into the light and help your users understand the risk, as well as work on ways for them to use AI/LLM in a safe way that benefits the business,” he advised.


AI-Powered DevSecOps: Automating Security with Machine Learning Tools

Here's the uncomfortable truth: AI is both causing and solving the same problem. A Snyk survey from early 2024 found that 77% of technology leaders believe AI gives them a competitive advantage in development speed. That's great for quarterly demos and investor decks. It's less great when you realize that faster code production means exponentially more code to secure, and most organizations haven't figured out how to scale their security practice at the same rate. ... Don't try to AI-ify your entire security stack at once. Pick one high-pain problem — maybe it's the backlog of static analysis findings nobody has time to triage, or maybe it's spotting secrets accidentally committed to repos — and deploy a focused tool that solves just that problem. Learn how it behaves. Understand its failure modes. Then expand. ... This is non-negotiable, at least for now. AI should flag, suggest, and prioritize. It should not auto-merge security fixes or automatically block deployments without human confirmation. I've seen two different incidents in the past year where an overzealous ML system blocked a critical hotfix because it misclassified a legitimate code pattern as suspicious. Both cases were resolved within hours, but both caused real business impact. The right mental model is "AI as junior analyst." ... You need clear policies around which AI tools are approved for use, who owns their output, and how to handle disagreements between human judgment and AI recommendations.
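The advice to pick one focused problem, such as spotting secrets accidentally committed to repos, can be sketched as a minimal pattern-based scanner. The patterns below are illustrative assumptions, not any specific tool's rule set:

```python
import re

# Illustrative patterns for common credential formats; real secret
# scanners ship far larger, curated rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*[\"'][A-Za-z0-9]{20,}[\"']"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_snippet) pairs for every pattern hit."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

In keeping with the "AI as junior analyst" model, a scanner like this should surface findings for human triage rather than auto-block a merge.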


AI & the Death of Accuracy: What It Means for Zero-Trust

The basic idea is that as the signal quality degrades over time through junk training data, models can remain fluent and fully interact with the user while becoming less reliable. From a security standpoint, this can be dangerous, as AI models are positioned to generate confident-yet-plausible errors when it comes to code reviews, patch recommendations, app coding, security triaging, and other tasks. More critically, model degradation can erode and misalign system guardrails, giving attackers the opportunity to exploit the opening through things like prompt injection. ... "Most enterprises are not training frontier LLMs from scratch, but they are increasingly building workflows that can create self-reinforcing data stores, like internal knowledge bases, that accumulate AI-generated text, summaries, and tickets over time," she tells Dark Reading. ... Gartner said that to combat the looming issue of model degradation, organizations will need a way to identify and tag AI-generated data. This could be addressed through active metadata practices (such as establishing real-time alerts for when data may require recertification) and potentially appointing a governance leader who knows how to responsibly work with AI-generated content. ... Kelley argues that there are pragmatic ways to "save the signal," namely through prioritizing continuous model behavior evaluation and governing training data.


The Friction Fix: Change What Matters

Friction is the invisible current that sinks every transformation. Friction isn’t one thing; it’s systemic. Relationships produce friction: between people, teams and technology. ... When faced with a systemic challenge, our human inclination is to blame. Unfortunately, we blame the wrong things. We blame the engineering team for failing to work fast enough, or decide the team is too small, rather than recognize that our Gantt chart was a fiction, an oversimplification of a complex dynamic. ... The fix is to pause and get oriented. Begin by identifying the core domain, the North Star. What is the goal of the system? For FedEx, it is fast package delivery. Chances are, when you are experiencing counterintuitive behavior, it is because people are navigating in different directions while using the same words. ... Every organization trying to change has that guy: the gatekeeper, the dungeon master, the self-proclaimed 10x engineer who knows where the bodies are buried. They also wield one magic word: No. ... It’s easy to blame that guy’s stubborn personality. But he embodies behavior that has been rewarded and reinforced. ... Refusal to change is contagious. When that guy shuts down curiosity, others drift towards a fixed mindset. Doubt becomes the focus, not experimentation. The organization can’t balance avoiding risk with trying something new. The transformation is dead in the water.


From devops to CTO: 8 things to start doing now

Devops leaders have the opportunity to make a difference in their organization and for their careers. Lead a successful AI initiative, deploy to production, deliver business value, and share best practices for other teams to follow. Successful devops leaders don’t jump on the easy opportunities; they look for the ones that can have a significant business impact. ... Another area where devops engineers can demonstrate leadership skills is by establishing standards for applying genAI tools throughout the software development lifecycle (SDLC). Advanced tools and capabilities require effective strategies to extend best practices beyond early adopters and ensure that multiple teams succeed. ... If you want to be recognized for promotions and greater responsibilities, a place to start is in your areas of expertise and with your team, peers, and technology leaders. However, shift your focus from getting something done to a practice leadership mindset. Develop a practice or platform your team and colleagues want to use and demonstrate its benefits to the organization. Devops engineers can position themselves for a leadership role by focusing on initiatives that deliver business value. ... One of the hardest mindset transitions for CTOs is shifting from being the technology expert and go-to problem-solver to becoming a leader facilitating the conversation about possible technology implementations. If you want to be a CTO, learn to take a step back to see the big picture and engage the team in recommending technology solutions.


The stakes rise for the CIO role in 2026

The CIO's days as back-office custodian of IT are long gone, to be sure, but that doesn't mean the role is settled. Indeed, Seewald and others see plenty of changes still underway. In 2026, the CIO's role in shaping how the business operates and performs is still expanding. It reflects a nuanced change in expectations, according to longtime CIOs, analysts and IT advisors -- and one that is showing up in many ways as CIOs become more directly involved in nailing down competitive advantage and strategic success across their organizations. ... "While these core responsibilities remain the same, the environment in which CIOs operate has become far more complex," Tanowitz added. Conal Gallagher, CIO and CISO at Flexera, said the CIO in 2026 is now "accountable for outcomes: trusted data, controlled spend, managed risk and measurable productivity." "The deliverable isn't a project plan," Gallagher said. "It's proof that the business runs faster, safer and more cost-disciplined because of the operating model IT enables." ... In 2026, the CIO role is less about being the technology owner and more about being a business integrator, Hoang said. At Commvault, that shift places greater emphasis on governance and orchestration across ecosystems. "We're operating in a multicloud, multivendor, AI-infused environment," she said. "A big part of my job is building guardrails and partnerships that enable others to move fast -- safely."


Inside the Shift to High-Density, AI-Ready Data Centres

As density increases, design philosophy must evolve. Power infrastructure, backup systems, and cooling can no longer be treated as independent layers; they have to be tightly integrated. Our facilities use modular and scalable power and cooling architectures that allow us to expand capacity without disrupting live environments. Rated-4 resilience is non-negotiable, even under continuous, high-density AI workloads. The real focus is flexibility. Customers shouldn’t be forced into an all-or-nothing transition. Our approach allows them to move gradually to higher densities while preserving uptime, efficiency, and performance. High-density AI infrastructure is less about brute force and more about disciplined engineering that sustains reliability at scale. ... The most common misconception is that AI data centres are fundamentally different entities. While AI workloads do increase density, power, and cooling demands, the core principles of reliability, uptime, and efficiency remain unchanged. AI readiness is not about branding; it’s about engineering and operations. Supporting AI workloads requires scalable and resilient power delivery, precision cooling, and flexible designs that can handle GPUs and accelerators efficiently over sustained periods. Simply adding more compute without addressing these fundamentals leads to inefficiency and risk. The focus must remain on mission-critical resilience, cost-effective energy management, and sustainability. 


Software Supply Chain Threats Are on the OWASP Top Ten—Yet Nothing Will Change Unless We Do

As organizations deepen their reliance on open-source components and embrace AI-enabled development, software supply chain risks will become more prevalent. In the OWASP survey, 50% of respondents ranked software supply chain failures number one. The awareness is there. Now the pressure is on for software manufacturers to enhance software transparency, making supply chain attacks far less likely and less damaging. ... Attackers only need one forgotten open-source component from 2014 that still lives quietly inside software to execute a widespread attack. The ability to cause widespread damage by targeting the software supply chain makes these vulnerabilities alluring for attackers. Why break into a hardened product when one outdated dependency—often buried several layers down—opens the door with far less effort? The SolarWinds software supply chain attack that took place in 2020 demonstrated the access adversaries gain when they hijack the build process itself. ... “Stable” legacy components often go uninspected for years. These aging libraries, firmware blocks, and third-party binaries frequently contain memory-unsafe constructs and unpatched vulnerabilities that could be exploited. Be sure to review legacy code and not give it the benefit of the doubt. ... With an SBOM in hand, generated at every build, you can scan software for vulnerabilities and remediate issues before they are exploited. 
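As a sketch of the last point, matching an SBOM's component list against a known-vulnerability feed can be as simple as an exact-version lookup. The SBOM shape here is a simplified stand-in for real formats such as SPDX or CycloneDX, and the hard-coded advisory map stands in for feeds like OSV or the NVD:

```python
# A minimal sketch: an SBOM reduced to (name, version) pairs, matched
# against an advisory map. Real pipelines parse SPDX/CycloneDX documents
# and query vulnerability feeds instead of a hard-coded dict.
def find_vulnerable(sbom_components, advisories):
    """Return (component, advisory_id) pairs for exact name+version matches."""
    hits = []
    for comp in sbom_components:
        key = (comp["name"], comp["version"])
        if key in advisories:
            hits.append((comp, advisories[key]))
    return hits

# Example inputs: log4j-core 2.14.1 really is affected by CVE-2021-44228.
sbom = [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "requests", "version": "2.32.0"},
]
advisories = {("log4j-core", "2.14.1"): "CVE-2021-44228"}
```

Because the SBOM is regenerated at every build, a check like this runs continuously, which is exactly what catches the forgotten dependency from 2014 before an attacker does.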


What the first 24 hours of a cyber incident should look like

When a security advisory is published, the first question is whether any assets are potentially exposed. In the past, a vendor’s claim of exploitation may have sufficed. Given the precedent set over the past year, it is unwise to rely solely on a vendor advisory for exploited-in-the-wild status. Too often, advisories or exploitation confirmations reach teams too late or without the context needed to prioritise the response. CISA’s KEV, trusted third-party publications, and vulnerability researchers should form the foundation of any remediation programme. ... Many organisations will leverage their incident response (IR) retainers to assess the extent of the compromise or, at a minimum, perform a rudimentary threat hunt for indicators of compromise (IoCs) before involving the IR team. As with the first step, accurate, high-fidelity intelligence is critical. Simply downloading IoC lists filled with dual-use tools from social media will generate noise and likely lead to inaccurate conclusions. Arguably, the cornerstone of the initial assessment is ensuring that intelligence incorporates decay scoring to validate command-and-control (C2) infrastructure. For many, the term ‘threat hunt’ translates to little more than a log search on external gateways. ... The approach at this stage will be dependent on the results of the previous assessments. There is no default playbook here; however, an established decision framework that dictates how a company reacts is key.
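The decay scoring mentioned above can be sketched as exponential aging of an indicator's confidence. The 30-day half-life is an illustrative assumption; a real programme would tune decay per indicator type (C2 IPs age faster than file hashes, for example):

```python
from datetime import datetime, timezone

def decayed_confidence(base_confidence: float, last_seen: datetime,
                       now: datetime, half_life_days: float = 30.0) -> float:
    """Exponentially decay an indicator's confidence with age: an IoC last
    seen one half-life ago scores half its original confidence."""
    age_days = max((now - last_seen).total_seconds() / 86400.0, 0.0)
    return base_confidence * 0.5 ** (age_days / half_life_days)
```

Scoring IoCs this way before a hunt filters out stale C2 infrastructure that would otherwise generate the noise and inaccurate conclusions described above.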


NIST’s AI guidance pushes cybersecurity boundaries

For CISOs, what should matter is that NIST is shifting from a broad, principle-based AI risk management framework toward more operationally grounded expectations, especially for systems that act without constant human oversight. What is emerging across NIST’s AI-related cybersecurity work is a recognition that AI is no longer a distant or abstract governance issue, but a near-term security problem that the nation’s standards-setting body is trying to tackle in a multifaceted way. ... NIST’s instinct to frame AI as an extension of traditional software allows organizations to reuse familiar concepts — risk assessment, access control, logging, defense in depth — rather than starting from zero. Workshop participants repeatedly emphasized that many controls do transfer, at least in principle. But some experts argue that the analogy breaks down quickly in practice. AI systems behave probabilistically, not deterministically, they say. Their outputs depend on data that may change continuously after deployment. And in the case of agents, they may take actions that were not explicitly scripted in advance. ... “If you were a consumer of all of these documents, it was very difficult for you to look at them and understand how they relate to what you are doing and also understand how to identify where two documents may be talking about the same thing and where they overlap.”

Daily Tech Digest - January 24, 2026


Quote for the day:

"Definiteness of purpose is the starting point of all achievement." -- W. Clement Stone



When a new chief digital officer arrives, what does that mean for the CIO?

One reason the CDO can unsettle CIOs is that the title has never had a consistent meaning. Isaac Sacolick, president and founder of StarCIO, said organizations typically create the role for one of two reasons. "Some organizations split off a CDO role because the CIO is overly focused on infrastructure and operations, and the business's customer and employee experiences, AI and data initiatives, and other innovations aren't meeting expectations," Sacolick said. "In other organizations, the CDO is a C-level title for the head of product management and UX/design functions, and reports to the CIO." Those two models lead to very different outcomes. In the first, the CDO is positioned as a corrective measure; in the second, the role is an extension of the CIO's broader operating model. Without clarity on which model is being pursued, confusion tends to follow. ... Across the experts, there was strong agreement on one point: The CIO remains central to the enterprise digital operating model, even as new roles emerge. "CIOs need to own the digital operating model and evolve it for the AI era," Sacolick said, noting that this increasingly involves "product-centric, agile, multi-disciplinary team organizational models." Ratcliffe echoed that sentiment, emphasizing accountability and trust. "The CIO should be the single point of ownership with the deep expertise feeding into it so there is consistency, business acumen and trust built within the technology function," he said.


Responsible AI moves from principle to practice, but data and regulatory gaps persist: Nasscom

The data shows a strong correlation between AI maturity and responsible practices. Nearly 60% of companies that say they are confident about scaling AI responsibly already have mature RAI frameworks in place. Large enterprises are leading this transition, with 46% reporting mature practices. Startups and SMEs trail behind at 16% and 20% respectively, but Nasscom sees this as ecosystem-wide momentum rather than a gap, given the growing willingness among smaller firms to learn, comply, and invest. ... Workforce enablement has become a central pillar of this transition. Nearly nine out of ten organisations surveyed are investing in sensitisation and training around Responsible AI. Companies report the highest confidence in meeting data protection obligations—reflecting relatively mature privacy frameworks—but monitoring-related compliance continues to be a concern. Accountability for AI governance still sits largely at the top. ... As AI systems become more autonomous, Responsible AI is increasingly seen as the deciding factor for whether organisations can scale with confidence. Nearly half of mature organisations believe their current frameworks are prepared to handle emerging technologies such as agentic AI. At the same time, industry experts caution that most existing frameworks will need substantial updates to address new categories of risk introduced by more autonomous systems. The report concludes that sustained investment in skills, governance mechanisms, high-quality data, and continuous monitoring will be essential.


AI-induced cultural stagnation is no longer speculation − it’s already happening

Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt. ... For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation. Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions. ... The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the text-to-image-to-text repeated conversions. This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration. 



Europe votes to tackle deep dependence on US tech in sovereignty drive

The depth of European reliance on foreign technology providers varies across sectors but remains substantial throughout the stack. In cloud infrastructure alone, Amazon, Microsoft, and Google command 70% of the European market, while local providers including SAP, Deutsche Telekom, and OVHcloud collectively hold just 15%. ... “Recent geopolitical tensions show that the issue of Europe’s digital sovereignty is of the utmost importance,” Michał Kobosko, the Renew Europe MEP who negotiated the report text, said in a statement. “If we do not act now to reduce Europe’s technological dependence on foreign actors, we run the risk of becoming a digital colony.” ... “Due to geopolitical tensions, the driver has shifted to reducing foreign digital dependency across the entire technology stack. European CIOs are now tasked with redesigning their approach to semiconductors, cloud, software, and AI, upending two decades of established strategy. It’s not going to be easy, it’s not going to be cheap, and it’s going to span multiple generations of CIOs.” When asked whether European enterprises will see viable sovereign alternatives across core technology areas, Henein said: “The answer is yes, but the time horizon is potentially more than a decade. Europe has been supporting US technology providers through licensing agreements for the better part of the last two decades.” ... A key question is whether the report’s proposed preferential procurement policies can actually change market realities.


One-time SMS links that never expire can expose personal data for years

One of the most significant findings involved how long these links remained active. All 701 confirmed URLs still worked when the researchers accessed them, often long after the original message was sent. More than half of the exposed links were between one and two years old. About 46% were older than two years. Some dated back to 2019. Public SMS gateways rarely retain messages for that long, which suggests that the actual lifetime of many links may extend even further. The risk starts as soon as a private link is exposed, but it grows with time. The longer a link stays active, the more chances there are for abuse through logs, forwarding, compromised devices, message interception, phone number recycling, or third-party access. ... In many services, the link carried a token passed to backend APIs. Some pages rendered data server side, while others fetched information after load. Only five services placed personal data directly inside the URL itself, though access results were similar once the link was opened. This design assumes the link remains private. According to Danish, product pressure plays a central role in keeping this pattern widespread. ... In one case, an order tracking page displayed an address, while API responses included phone numbers, geolocation data, and driver details. In another, a loan service returned bank routing numbers and Social Security numbers that were only visible in network logs. This data became reachable as soon as the link was opened, even before the page finished loading. 


How enterprise architecture and start-up thinking drive strategic success

Strategy is now judged less by the quality of vision decks and more by how quickly enterprises can test, learn and scale what works and is valuable. To keep pace, enterprises increasingly combine the discipline of enterprise architecture with the speed and adaptability associated with a start-up mindset. ... Modern enterprise architecture is less about cataloging systems and more about shaping how an enterprise senses opportunities, mobilizes resources and transforms at pace. In a high-performing enterprise, it acts as a bridge between strategy and execution in three concrete ways: alignment and clarity; transparency and risk management; and decision support and adaptive governance. ... Start-ups and scale-ups operate under uncertainty, but they thrive by learning in short cycles, minimizing waste and scaling only what demonstrates traction. When large enterprises infuse enterprise architecture with similar principles, the function becomes a multiplier for speed rather than a constraint. ... Cross-functional innovation and flexible governance complete the picture. In many enterprises, architects now embed directly in domain or platform teams, joining strategic backlog refinement, incident reviews and design sessions as peers. In a large healthcare network, for instance, enterprise architecture practitioners joined clinical, operations and analytics teams to co-design a data platform that could support both operational reporting and AI-driven decision support.


From Conflict To Collaboration: How Tension Can Strengthen Your Team

Letting tensions simmer is one of the most common leadership mistakes. The longer a disagreement sits in the corner, the more toxic it becomes. ... Teams function better when they normalize honest conversation before things go sideways. A simple practice—opening meetings with "wins and worries"—creates a habit of surfacing concerns early. Netflix cofounder Reed Hastings echoes this principle: "Only say about someone what you will say to their face." It’s a powerful expectation. Candor reduces gossip, eliminates guesswork and gives leaders clarity long before emotions get out of hand. ... When conflict arises, people don’t immediately need solutions. What they need is to feel heard. It’s vital to fully understand their concerns so there is no ambiguity. Repeat your understanding of their position before giving your input. It’s remarkable how much progress can be made when people feel genuinely heard. ... Compromise has an unfair reputation in business culture, as if giving an inch signals defeat. In practice, it’s a recognition that multiple perspectives may hold merit. Good leaders invite both sides to walk through their rival viewpoints together. When people better understand the context behind each position, they’re far more willing to find common ground that moves the team forward. ... Many conflicts resurface not because the solution was wrong, but because leaders assumed the first conversation fixed everything. 


Six tips to gain control over your cloud spending

The first step any organization should take before shifting a workload to the cloud is performing proper due diligence on ROI. It isn’t always the case that moving workloads to the cloud will translate into financial savings. Many variables should be considered when calculating ROI, including current infrastructure, licensing and hiring. ... A formal cloud governance framework establishes rules, policies, and processes that formalize how cloud resources will be accessed, used, and retired. Accurately matching cloud resources to workload demands improves resource utilization and minimizes waste. ... FinOps, short for financial operations, is a management discipline that involves collaboration between finance, operations and development teams to manage cloud spending. By implementing tools and processes for cost tracking, budgeting, and forecasting, businesses can gain insights into their cloud expenses and identify areas for optimization. ... Providers offer a variety of discounts that can significantly reduce cloud costs. For example, reserved instance pricing models offer discounts to customers who reserve cloud resources over a fixed period. Some providers offer tiered pricing models in which the cost per unit decreases as you consume more resources. ... You may find that moving some workloads to the cloud offers no significant performance advantages. Repatriating some applications, data and workloads back to on-premises infrastructure can often improve performance while reducing cloud spending.
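Tiered pricing, where the per-unit cost drops as consumption grows, is straightforward to model when estimating ROI. The rate card below is made up for illustration; real provider pricing varies by service and region:

```python
def tiered_cost(units: float, tiers: list[tuple[float, float]]) -> float:
    """Compute cost under tiered pricing.

    tiers: list of (tier_size, unit_price) pairs; the final tier's size
    can be float("inf") to cover all remaining consumption.
    """
    cost, remaining = 0.0, units
    for size, price in tiers:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

# Illustrative rate card: first 1,000 units at $0.10, next 9,000 at $0.08,
# everything beyond that at $0.05 per unit.
RATES = [(1000, 0.10), (9000, 0.08), (float("inf"), 0.05)]
```

Running projected workload volumes through a model like this, against both cloud rate cards and on-premises cost estimates, is one concrete way to do the ROI due diligence described above.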


These 4 big technology bets will reshape the global economy in 2026

Disruptive technologies will have a material impact on real GDP growth. ARK suggested that capital investment alone, catalyzed by disruptive innovation platforms, could add 1.9% to annualized real GDP growth this decade. Each innovation platform (AI, public blockchains, robotics, energy storage, and multiomics) should provide a structural boost to global growth. ... According to ARK research, hyperscalers are expected to spend more than $500 billion on capital expenditures (capex) in 2026, nearly four times the $135 billion spent in 2021, the year before the launch of ChatGPT in 2022. ... ARK forecast that AI agents could facilitate more than $8 trillion in online consumption by 2030. ARK noted that as consumers delegate more decisions to intelligent systems, AI agents should capture an increasing share of digital transactions, from 2% of online spend in 2025 to around 25% by 2030. ... AI agents are becoming more productive. ARK found that advances in reasoning capability, tool use, and extended context are driving an exponential increase in the capability of AI agents. The duration of tasks these agents can complete reliably increased five times, from six minutes to 31 minutes, in 2025. ... ARK suggested robots are a growing part of the labor force and took a historical look at productivity and labor hours. As productivity increased, each hour of labor became more valuable, enabling increased output with fewer hours, as living standards continued to rise.


Half of agentic AI projects are still stuck at the pilot stage

The main barriers to full implementation, respondents said, are concerns with security, privacy, or compliance, cited by 52%, followed by technical challenges to managing agents at scale, at 51%. “Organizations are not slowing adoption because they question the value of AI, but because scaling autonomous systems safely requires confidence that those systems will behave reliably and as intended in real-world conditions,” said Alois Reitbauer, chief technology strategist at Dynatrace. Seven in ten agentic AI-powered decisions are still verified by humans, and 87% of organizations are actively building or deploying agents that require human supervision. ... A recurring pain point for enterprises tinkering with agentic AI tools lies in observability, according to Dynatrace. Observability of these autonomous systems is needed across every stage of the life cycle, from development and implementation through to operationalization. Observability is most used in implementation, at 69%, followed by operationalization at 57% and development at 54%. “Observability is a vital component of a successful agentic AI strategy. As organizations push toward greater autonomy, they need real-time visibility into how AI agents behave, interact, and make decisions,” Reitbauer said. “Observability not only helps teams understand performance and outcomes, but it provides the transparency and confidence required to scale agentic AI responsibly and with appropriate oversight.”

Daily Tech Digest - May 02, 2023

Is misinformation the newest malware?

"When we were thinking about the risks of Twitter being targeted by, let's say, the Russian government, we always had to recognize that there would be attempts to get into Twitter's systems and target the company and exfiltrate user data," Roth said. "There would be attempts to influence the conversations happening on the platforms, and there would be attempts to compromise the accounts of Twitter's users. There were multiple layers to each of these things. And Twitter as a company had a role to play in addressing that conduct across each one of those levels.” Roth pointed to the "great Twitter hack of 2020," when financially motivated people in their twenties compromised a Twitter employee's account to promote a crypto scam on high-profile accounts. This incident is an example of what he called the "illusory distinction" between malware and misinformation. "This was targeting Twitter's employees to gain access to Twitter's backend systems in order to carry out malicious activity propagated across the social network. You cannot think of these problems in isolation," Roth said.


Just Who Exactly Should Take Responsibility for Application Security?

We talk a great deal about shifting left and putting it on individuals. But if developers’ goals and incentives don’t include security, they won’t do it. Humans act in their own interests and unless their interests are made to be something different, they’re going to behave how they want to behave. If a company wants to secure code, it’s on them to put in place the standards, enforce the standards, and actually care and invest. Companies that don’t do those things will never be secure and are basically just setting up people to fail. Companies have to get their priorities right and invest in the tools and training that empower developers to practice robust security. … But developers do need to be engaged. There are things that development managers can do to introduce more security in a reasonable way that doesn’t cost a ton of extra time and money. Importantly, they can lead by encouraging developers to take reasonable steps that will help. For instance, when introducing a new library, don’t introduce anything that’s got a known vulnerability, kind of a “do no harm” approach.
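The “do no harm” library rule described above lends itself to automation. Here is a minimal sketch of a dependency gate that checks proposed packages against a locally cached list of known-vulnerable releases; the package names and advisory data are invented for illustration, and a real pipeline would query an advisory database instead of a hard-coded dictionary.

```python
# Hypothetical "do no harm" dependency gate: before a new library is added,
# compare its (name, version) pair against known-vulnerable releases.
# The advisory data below is illustrative only.
KNOWN_VULNERABLE = {
    ("log4j-core", "2.14.1"): "CVE-2021-44228 (Log4Shell)",
    ("requests", "2.5.0"): "CVE-2015-2296 (session fixation)",
}

def check_dependency(name, version):
    """Return the advisory ID if (name, version) is known vulnerable, else None."""
    return KNOWN_VULNERABLE.get((name, version))

def gate_new_dependencies(proposed):
    """Return a list of rejection messages; an empty list means all clear."""
    failures = []
    for name, version in proposed:
        advisory = check_dependency(name, version)
        if advisory:
            failures.append(f"{name}=={version} blocked: {advisory}")
    return failures
```

The point of the sketch is the workflow, not the data source: the check runs when a dependency is introduced, so vulnerable libraries never enter the codebase in the first place.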


Why We Should Establish Guardrails For Artificial General Intelligence Now

Weizenbaum’s fears show that ethical concerns over computers’ capabilities are nothing new. As we enter the exciting age of AGI-led possibilities, perhaps we should take lessons from what happened with social media platforms. When applications like MySpace, Facebook and the like first launched, they were touted as a means to bring people together and enable self-expression through personal posts and photo sharing. The platforms’ intent was to connect people in a convenient, friendly way. What the platforms’ founders didn’t envision is that one day, these networks would bombard members with annoying advertisements that creepily follow them around. They didn’t worry that they were asking members to give their most personal details to large corporations or possibly even governments (e.g., TikTok). They didn’t expect that disinformation would interfere in elections or that children would be bullied or view harmful content. As a result, the operations of these social platforms are now under question and they might face government regulation if they can’t gain control over content and data privacy.


Your decommissioned routers could be a security disaster

Often, they included network locations and some revealed cloud applications hosted in specific remote data centers, “complete with which ports or controlled-access mechanisms were used to access them, and from which source networks.” Additionally, they found firewall rules used to block or allow certain access from certain networks. Often specifics about the times of day they could be accessed were available as well. “With this level of detail, impersonating network or internal hosts would be far simpler for an attacker, especially since the devices often contain VPN credentials or other easily cracked authentication tokens,” according to the white paper. The routers—four Cisco ASA 5500 Series, three Fortinet Fortigate Series, and 11 Juniper Networks SRX Series Service Gateways—were all bought legally through used-equipment vendors, according to the paper. “No procedures or tools of a primarily forensic or data-recovery nature were ever employed, nor were any techniques that required opening the routers’ cases,” yet the researchers said they were able to recover data that would be “a treasure trove for a potential adversary—for both technical and social-engineering attacks.”


5 surefire ways to derail a digital transformation (without knowing it)

Digital transformations can start with one initiative, defined goals, and a dedicated team. But CIOs are under pressure to accelerate and find digital transformation force multipliers. That means growing the number of leaders and teams that can plan innovations and deliver transformative impacts. “Innovation does not happen in isolation: It occurs when organizations encourage and nurture it, often with processes to enable nontraditional ways of thinking, working, and the space to try out ideas in a safe environment,” says Hasmukh Ranjan, CIO of AMD. Here’s how I spot derailments: Ask initiative leaders to share access to their roadmaps, agile backlogs, collaboration tools, stakeholder communications, and internal documentation. ... Subject matter experts and internal stakeholders should be contributors to priorities and requirements, not decision-makers or backlog dictators. Digital transformations derail when CIOs miss the opportunity to establish and communicate product management responsibilities for creating and evolving market- and customer-driven roadmaps.


IS Audit in Practice: Advantages of Technology in Achieving Diversity

The benefits of diversity have long been sought after by schools of management. Diverse styles produce a broad range of ideas and approaches, which can translate to a more cohesive work environment and create a competitive edge that impacts the bottom line. Diverse work teams with inclusive mindsets can bridge gaps in understanding that help avoid rework. The classic example is strong collaboration between IT and the business, where post-development user acceptance testing (UAT) produces a go-live outcome that satisfies users. Diverse teams also make it easier to reach a wider audience by creating products and services that are broadly appealing. Technology helps make these products and services more ubiquitous. If diversity can bring such advantages, why is it so hard to achieve? The terms "unconscious bias," "the boys’ club," "cliques" and "the inner circle" suggest that work and social groups form around what is familiar. ... Breaking away from the known and comfortable to include new approaches and different individuals can feel risky, as any change does for those accustomed to operating within established boundaries.


The role of AI as an everyday life assistant

One of the concerns the book raises is how businesses experienced in selling to humans will respond. There is no reason to assume the machine will remain in the domain of low-value purchasing, leaving businesses free to focus their efforts on high-value human customers. “Doubling down on the human market and perceived higher-value human customer service capabilities, the losers will find their cost of sale gradually increasing even as their revenue and total addressable market appears to shrink,” warn Raskino and co-author Don Scheibenreif. Society may not yet be ready for the machine customer, but the idea is finding its way into people’s lives by automating boring or repetitive tasks. In the book, Raskino and Scheibenreif discuss the May 2018 demonstration by Google CEO Sundar Pichai of an AI assistant called Duplex. The AI was so convincing that it was able to book an appointment at a hair salon over the telephone, without the person on the other end of the line being aware that it was a machine making the appointment.


Data infrastructure: The picks and shovels of the AI gold rush

While AI models form the cornerstone of this recent progress, scaling AI requires a robust data foundation that trains models and serves them effectively. This process involves collecting and storing raw data, utilizing computational power to transform data and train models, and processing and ingesting data in real-time for inference. Ultimately, turning raw data into AI insights in production is complex and dependent on having strong data infrastructure. Data engineering teams will play a crucial role in enabling AI and must lean into an ever-improving set of tools to address rapidly growing volumes of data, larger models, and the need for real-time processing and movement of data. Data infrastructure has transformed over the past decade irrespective of AI, driven by the shift to the cloud and a greater focus on analytics. This transformation has created huge commercial successes with the likes of Snowflake, Databricks, Confluent, Elastic, MongoDB, and others. Today, we are in a moment in time where storage and compute limitations have largely been erased thanks to the cloud.


Why platform engineering?

While simple in concept, platform engineering isn’t trivial to execute because it requires a product development mindset. Platform engineers must develop a product that agile development teams want to consume, and developers must let go of their desires for DIY (do it yourself) devops approaches. One place to start is infrastructure and cloud provisioning, where IT can benefit significantly from standards, and developers are less likely to have application-specific architectural requirements. Donnie Berkholz, senior vice president of product management at Percona, says, “Platform engineering covers how teams can deliver the right kind of developer experience using automation and self-service, so developers can get to writing code and implementing applications rather than having to wait for infrastructure to be set up based on a ticket request.” Therein lies the customer pain point. If I am a developer or data scientist who wants to code, the last thing I want to do is open a ticket for computing resources. But IT and security leaders also want to avoid having developers customize the infrastructure’s configuration, which can be costly and create security vulnerabilities.
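The self-service idea Berkholz describes can be sketched in a few lines: instead of filing a ticket, a developer submits a provisioning request that is validated against platform-approved “golden path” templates, so standard requests succeed automatically while custom infrastructure is refused by default. The template names and resource limits below are invented for illustration.

```python
# Toy self-service provisioning gate: requests matching a platform-approved
# "golden path" template are granted automatically; anything else is routed
# back to the platform team. Templates and limits are hypothetical.
GOLDEN_PATHS = {
    "web-service": {"cpu_max": 4, "memory_gb_max": 8},
    "batch-job": {"cpu_max": 16, "memory_gb_max": 64},
}

def provision(template, cpu, memory_gb):
    spec = GOLDEN_PATHS.get(template)
    if spec is None:
        return f"rejected: unknown template {template!r}, open a platform request"
    if cpu > spec["cpu_max"] or memory_gb > spec["memory_gb_max"]:
        return "rejected: exceeds template limits"
    return f"provisioned: {template} with {cpu} CPU / {memory_gb} GB"
```

This captures the trade-off in the excerpt: developers get instant resources for standard shapes, while IT and security keep customization from creeping in.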


3 Ways To Manage Conflict In The Workplace

If you’re experiencing a conflict, you might spend some time digging into all the possible root causes of the conflict you’re currently dealing with that may be different from your initial perception. In writing down the possibilities or alternatives, you just might find that the conflict you thought you were struggling with isn’t what the conflict is actually about. This is an exercise to get to the heart of the matter, because we can’t solve for what we don’t even realize exists. ... Justification is often what keeps us stuck in conflict, according to conflict and collaboration consultant Cair Canfield. Conflict can keep us stuck if our egos want us to remain blameless, like we don’t have any part in the problem and so we don’t have to change. But it doesn’t really serve you, because you’ll keep doing the same thing in the same way, rather than be able to move forward productively. ... Instead of immediately shutting down an idea because you disagree with it, ask questions. You might ask, ‘What in your life has shaped your viewpoint?’ Being curious about why the other person sees things the way they do helps your brain to stay open to new information, while being defensive can make you less open-minded.



Quote for the day:

"Blessed are the people whose leaders can look destiny in the eye without flinching but also without attempting to play God" -- Henry Kissinger

Daily Tech Digest - April 20, 2019

How to reconstruct your business’s value chain for the digital world

What’s the big advantage of digital? It allows you to disconnect yourself from physical constraints. With Uber, you no longer have to be in the street to hail a cab. You can order a cab from anywhere. If you digitize the supply-chain process, you are no longer linking the production of the product to one physical location. In the analog world, a person would check the inventory and write an order for supplies. When there was a spike in demand, that person would call more people and write more orders for more supplies. But in the digital world, you can create a manufacturing process where your inventory, recipes, and prices are all available on a digitized, harmonized ecosystem. When demand spikes, you can turn the dial on your robotic process automation (RPA) tool. When we digitize and harmonize complex business processes, we no longer have to call a guy who orders a part. Instead, you have a view into the inventory across multiple suppliers. The CIO has a unique and critical role in digital transformation, as long as they don’t fall into a few common traps. One such trap is when the CEO throws money at you and tells you, “Bring me this shiny new technology.”


Why Enterprise-Grade Cybersecurity Needs a Federated Architecture

A federated architecture combines the strengths of centralized and distributed and is, therefore, a kind of “best of both worlds” approach. With federated, a controller is placed in each data center or public cloud region (just like distributed), but those multiple controllers act in concert so as to provide the abstraction that there is one centralized controller. All of the controllers in a federated architecture communicate with each other to share information about the organization’s security policy as well as the workloads that are being secured. This type of architecture is the best when it comes to securing global infrastructure at scale. And, as is typically the case when writing enterprise-grade software, making the right architectural choice and then implementing it in an elegant way required our architects and engineers to spend a little more time and be a little more thoughtful. Our ultimate goal was to deliver an enterprise-scale architecture that delivered the benefits of a federated architecture without the downsides of distributed and centralized.
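The federated pattern described above can be illustrated with a toy model: one controller per region, each holding local workload state, plus a sync step that merges policy and workload views so every controller can answer global queries as if it were a single centralized controller. Class and field names here are invented for illustration, not taken from any product.

```python
# Toy federated-controller sketch: regional controllers exchange local state
# so that each one ends up with the same merged global view.
class RegionalController:
    def __init__(self, region):
        self.region = region
        self.local_workloads = {}   # workload id -> label
        self.global_view = {}       # merged view across all regions

    def register_workload(self, workload_id, label):
        self.local_workloads[workload_id] = label

def federate(controllers):
    """Merge every controller's local state, then push the merged view to all,
    giving the abstraction of one centralized controller."""
    merged = {}
    for c in controllers:
        merged.update(c.local_workloads)
    for c in controllers:
        c.global_view = dict(merged)
```

Failure of one region in this model leaves the other controllers with their own copies of the merged view, which is the availability advantage the excerpt attributes to federation over a purely centralized design.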


Ready for 6G? How AI will shape the network of the future


Take the problem of coordinating self-driving vehicles through a major city. That’s a significant challenge, given that some 2.7 million vehicles enter a city like New York every day. The self-driving vehicles of the future will need to be aware of their location, their environment and how it is changing, and other road users such as cyclists, pedestrians, and other self-driving vehicles. They will need to negotiate passage through junctions and optimize their route in a way that minimizes journey times. That’s a significant computational challenge. It will require cars to rapidly create on-the-fly networks, for example, as they approach a specific junction—and then abandon them almost instantly. At the same time, they will be part of broader networks calculating routes and journey times and so on. “Interactions will therefore be necessary in vast amounts, to solve large distributed problems where massive connectivity, large data volumes and ultra low-latency beyond those to be offered by 5G networks will be essential,” say Stoica and Abreu.


IT Governance 101: IT Governance for Dummies, Part 2

One of the powerful aspects of COBIT is that it acts as the glue between governance and management, describing both governance and management processes. Its concept of cascading enterprise goals to IT goals to enabler goals and metrics ensures consistent communication and alignment. These enablers, such as Processes, are where all the IT management frameworks can be plugged in, helping to give the frameworks a business context and ensuring that they focus on delivering value and outcomes, not just outputs. As stated by one expert in the UAE, “I think often because organizations do not do a goals cascade things feel disconnected and orphaned, but once you do a proper goals cascade you can see and feel the interconnection and how goals are interdependent on each other to achieve the enterprise-level goals.” ... Clearly, these exploding business demands for new benefits exist and, at the same time, IT is expected to make everything secure, replace all that legacy stuff that is slowing down the Ubering, and stop IT from breaking as well.


Some internet outages predicted for the coming month as '768k Day' approaches

The good news is that network admins have known about 768k Day for a long time, and many have already prepared, either by replacing old routers with new gear or by making firmware tweaks to allow devices to handle global BGP routing tables that exceed even 768,000 routes. "Yes, TCAM memory settings can be adjusted to help mitigate, and even go beyond 768k routes on some platforms, which will work if you don't run IPv6. These setting changes require a reboot to take effect," Troutman said. "The 768k IPv4 route limit is only a problem if you are taking ALL routes. If you discard or don't accept /24 routes, that eliminates half the total BGP table size. "The organizations that are running older equipment should know this already, and have the configurations in place to limit installed prefixes. It is not difficult," Troutman added. "I have a telco ILEC client that is still running their network quite nicely on old Cisco 6509 SUP-720 gear, and I am familiar with others, too," he said.
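Troutman's point about discarding /24s is easy to verify with back-of-the-envelope arithmetic: because /24 prefixes make up roughly half of the global IPv4 BGP table, filtering them out brings a full table back under the 768k TCAM route limit. The route counts in this sketch are made up for illustration; real per-prefix-length distributions come from route collectors.

```python
# Rough sketch of the /24-filter mitigation: sum only the routes whose
# prefix length is no more specific than the accepted maximum.
def routes_after_filter(prefix_lengths, max_accepted_len):
    """prefix_lengths maps prefix length -> route count."""
    return sum(count for length, count in prefix_lengths.items()
               if length <= max_accepted_len)

# Illustrative table: ~800k routes, a bit over half of them /24s.
table = {16: 13_000, 20: 95_000, 22: 110_000, 23: 160_000, 24: 430_000}

full_size = sum(table.values())            # 808,000 routes: over the 768k limit
filtered = routes_after_filter(table, 23)  # drop /24s: 378,000 routes remain
```

As the excerpt notes, this only matters for routers taking full tables; the trade-off is that traffic to filtered prefixes must follow a less-specific or default route.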


Bots Are Coming! Approaches for Testing Conversational Interfaces

When testing such interfaces, natural language is the input and we humans really love having alternatives and love our synonyms and our expressions. Testing in this context moves from pure logic to something close to fuzzy logic and clouds of probabilities. As they are intended to provide a natural interaction, testing conversational interfaces also requires a great deal of empathy and understanding of the human society and ways of interacting. In this area, I would include cultural aspects, including paraverbal aspects of speech (that is, all communication happening beside the spoken message, encoded in voice modulation and level). These elements provide an additional level of complexity and many times the person doing the testing work needs to consider such aspects. I believe it’s fair to say that testing a conversational interface can also be seen as tuning, so that it passes a Turing test. Another challenge faced when testing such interfaces is the distributed architecture of systems.
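One practical consequence of "synonyms and expressions" as input is paraphrase-driven testing: the same intent is expressed many ways, and the test asserts the system maps every variant to the expected intent. In this sketch the keyword-based classifier is a deliberately naive stand-in for a real NLU model; the intents and phrasings are invented for illustration.

```python
# Paraphrase suite for a conversational interface: many surface forms,
# one expected intent per group. The classifier here is a toy stand-in.
def classify_intent(utterance):
    text = utterance.lower()
    if any(w in text for w in ("cancel", "call off")):
        return "cancel_appointment"
    if any(w in text for w in ("book", "reserve", "schedule")):
        return "make_appointment"
    return "unknown"

PARAPHRASES = {
    "make_appointment": [
        "Book me a haircut for Tuesday",
        "I'd like to reserve a table",
        "Can you schedule a checkup?",
    ],
    "cancel_appointment": [
        "Please cancel my booking",
        "Call off the meeting",
    ],
}

def run_paraphrase_suite():
    """Return a list of failures; an empty list means the suite passed."""
    failures = []
    for expected, variants in PARAPHRASES.items():
        for utterance in variants:
            got = classify_intent(utterance)
            if got != expected:
                failures.append(f"{utterance!r}: expected {expected}, got {got}")
    return failures
```

Note the ordering of the checks: "Please cancel my booking" contains both "cancel" and "book", which is exactly the kind of fuzziness the excerpt warns about; a real suite would track a pass rate rather than demand perfection.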


Protecting smart cities and smart people

For as long as most can remember, information security was a technology concern, handled by technologists, and discussed by security engineers and associated professionals. The security vendors presented at security conferences, and the security professionals attended accordingly: cat people with cat people. You know how it goes. Within a smart city ecosystem, we need to extend the cyber conversation beyond the traditional players. How do we make the City Planner appreciate what we understand? How do we share and apply security best practices to an engineering company providing a Building Information Modelling (BIM) service to a Hospital or Defence project? Moreover, how do we, in the first instance, highlight the security concerns? Attending and speaking at numerous cyber conferences I sometimes wonder, is this the right audience? In this digital eco-system, we should be speaking to civic and government leaders about our security concerns facing smart cities and critical infrastructure, not exclusively to other security professionals. They are well aware of the challenges and the resistance experienced.


Don't underestimate the power of the fintech revolution

According to Bank of England Governor Mark Carney, FinTech’s potential is to unbundle banking into its core functions - such as settling payments and allocating capital. For central bankers and regulators who are monitoring the sector, the growth of fintech is akin to any other disruptive technology - that is, will it lead to financial instability? Most fintech start-ups are not regulated as much as traditional financial institutions. So far, it’s the more open financial markets that have seen fintech develop rapidly. One example is the e-payment system M-Pesa, which operates in Kenya, Tanzania and elsewhere, and is one of the biggest fintech success stories since its emergence just a decade ago. By effectively transforming mobile phones into payment accounts, M-Pesa has increased financial access for previously unbanked people. The permissive stance of the Kenyan central bank allowed the sector to develop rapidly in one of East Africa’s most developed economies.


Data Breaches in Healthcare Affect More Than Patient Data

Cybercriminals go after any data they perceive to be valuable, says Rebecca Herold, president of Simbus, a privacy and cloud security services firm, and CEO of The Privacy Professor consultancy. "Payroll data contains a wide range of really valuable data that cybercrooks can sell to other crooks for high amounts," she says. "With the growing number of pathways into healthcare systems and networks ... that are being established through employee-owned devices, through third parties/BAs, and through IoT devices, I believe that such fraud is increasing because of the many more opportunities that crooks have now to commit these types of crimes." The recent attacks on Blue Cross of Idaho and Palmetto Health spotlight the importance for healthcare entities to diligently safeguard all data, says former healthcare CISO Mark Johnson of the consultancy LBMC Information Security. The attacks "underscore for me that the healthcare industry needs to protect the entire environment, not just their large systems like the EMR," he says.


Why Your DevOps Is Not Effective: Common Conflicts in the Team

In the DNA of DevOps culture lies the principle of constant and continuous interaction as well as collaboration between different people and departments. The key reason for this is a much greater final efficiency and a much shorter time to market compared to the traditional approach. Proper implementation of DevOps shifts the focus from personal effectiveness to team efficiency. At the same time, due to automation and the widespread introduction of monitoring and testing, it is possible to track the occurrence of a problem at the early stages, as well as quickly find the causes of problems. Building the right culture in the organization is important, and it does not depend on DevOps directly: problems occur in all companies, but in an organization with the right culture, all efforts will be directed at solving the problem and preventing it in the future, rather than at finding and punishing a guilty party.



Quote for the day:


"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kanter


December 23, 2014

CIO interview: Catherine Doran, CIO, Royal Mail
According to Doran, one main concern during the recruitment exercise was avoiding a "scattergun approach". Given that it was an extended campaign, the last thing the CIO wanted was seeing job applications dwindle because of a possible impression that something was wrong. The solution to that risk was driving targeted campaigns to different communities using LinkedIn. "LinkedIn was a big deal for us, to be honest. When we were looking for architects we would target that community, do a campaign with them for a bit, then we wouldn’t do anything with them for a while. Then we’d release a set of jobs to, for example, testing professionals, then programme and project management people and so on," Doran says.


Using the Open FAIR Body of Knowledge with Other Open Group Standards
The Open FAIR Body of Knowledge provides a model with which to decompose, analyze, and measure risk. Risk analysis and management is a horizontal enterprise capability that is common to many aspects of running a business. Risk management in most organizations exists at a high level as Enterprise Risk Management, and it exists in specialized parts of the business such as project risk management and IT security risk management. Because the proper analysis of risk is a fundamental requirement for different areas of Enterprise Architecture (EA), and for IT system operation, the Open FAIR Body of Knowledge can be used to support several other Open Group standards and frameworks.


Conflict and Resolution in the Agile World
Collaboration means conflict: Any time more than one person works on a problem, there will be disagreements about how to solve it. Whether you disagree over methodology, philosophy, tools, technology, personality or even the basic understanding of the problem, you will have to work through your disagreements to get to a solution. The more people that work together, the harder it is to get consensus. Transparency means conflict: Agile practices place a premium on transparency. Transparency allows problems to surface and be squashed. Without transparency, problems can fester, grow and ultimately become insurmountable. But with the good comes the bad. With increased transparency, there is also an opportunity for more disagreements, and conflict within the team and with external stakeholders.


Success of Health IT Rests With Business Alignment
Some in the medical community suggest that EHRs and other health IT systems would be most effective if they were to fade into the background and minimize the interaction required with the care provider. Kavita Patel, managing director of clinical transformation at the Engelberg Center for Healthcare Reform, says that practitioners would welcome technologies like motion-capture gesturing systems that would "do away with the computer in the room." "Any of these workarounds or kind of 'life hacks' that I think we can do in clinical medicine are probably something that every physician or every clinician who sees patients would want millions of," she says. "So there's an entrepreneurial mission waiting to happen."


Getting Your Data House in Order
When we talk about getting our houses in order, sometimes we mean our financials, relationships, or our actual house. What about an organization’s data house? I see many correlations between data problems and companies’ lack of organization. When I talk about getting our data house in order, I am talking about the nitty-gritty of solid data governance practice. Much has been written and discussed on the principles and frameworks of data governance, but sometimes the mechanics of making data decisions are overlooked. To me, it is a matter of embedded organization practice.



The Power of Cloud Computing
“Arguably the most essential aspect of the Cloud is its ability to provide an integration of nearly limitless numbers of data sources involving structured, semi-structured, and unstructured data,” Dataversity’s Jelani Harper writes. “Such integration spans geographic location and includes both on-premise and Cloud sources, and is frequently typified by a speed of access that comes in real time or close to real time.” Obviously, that’s not something that would be cheap or easy or maybe even possible with traditional data management tools, she adds. The article includes three sample use cases that show off cloud computing’s mad data integration skills.


5 things you should know about DDoS attacks, outages, SSL, and web performance
Last week at Radware, we released our annual Global and Network Security Report. This report is based on data gathered from a survey of 330 organizations worldwide. The survey was designed to collect objective, vendor-neutral information about the issues organizations face when preparing for and fighting against cyberattacks. The report gives a comprehensive and objective review of the past year’s cyberattacks from both a business and a technical perspective. It also offers best practice advice for organizations when planning for cyberattacks in 2015. But my favourite aspect of this report is the fascinating play-by-play insight into how today’s sophisticated attacks take place.


20 Netstat Commands for Linux Network Management
netstat (network statistics) is a command line tool for monitoring network connections, both incoming and outgoing, as well as viewing routing tables, interface statistics, etc. netstat is available on all Unix-like operating systems as well as on Windows. It is very useful for network troubleshooting and performance measurement. netstat is one of the most basic network service debugging tools, telling you what ports are open and whether any programs are listening on ports.
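Because netstat's output is plain text, simple parsing often covers troubleshooting scripts. This sketch pulls the listening TCP ports out of netstat-style output; the sample text mimics the column layout of Linux `netstat -tln` output, but real output varies by platform, so treat the field positions as an assumption.

```python
# Extract listening ports from netstat-style output (Linux -tln layout
# assumed: Proto, Recv-Q, Send-Q, Local Address, Foreign Address, State).
SAMPLE = """\
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN
tcp6       0      0 :::80                   :::*                    LISTEN
"""

def listening_ports(netstat_output):
    """Return the sorted local ports of every line in LISTEN state."""
    ports = []
    for line in netstat_output.splitlines():
        if line.rstrip().endswith("LISTEN"):
            local = line.split()[3]                  # e.g. "127.0.0.1:3306" or ":::80"
            ports.append(int(local.rsplit(":", 1)[1]))  # rsplit copes with IPv6 colons
    return sorted(ports)
```

The `rsplit(":", 1)` is the one subtlety: IPv6 local addresses like `:::80` contain multiple colons, so the port must be taken from the right.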


2015: The Year of the Compliance-Created Cyber Confidence Collapse?
The biggest security risk now faced by employers is not outside hackers. It is compliance experts who stay just long enough to help you tick the latest regulatory boxes, having acquired the understanding of your systems and security credentials necessary to do so. The drive by the European Commission to address supposed "data protection" problems, supported by the US obsession with "Data Breach Notification", could not have done a better job in opening up opportunities for serious fraud (both high value and mass market) if they had been actively planned by organised crime.


Charlatans: The new wave of privacy profiteers
Within two days the Kickstarter project, which began at $7500, blew up into a $600,000 funding sensation. It also drew enough attention to Germar's dangerously false promises that Germar's con unraveled, fast. Within a week of all the great PR, funders began withdrawing their dollars in droves, and public outcry pushed Kickstarter to suspend Anonabox's funding campaign. But not before things got quite ridiculous -- in large part due to this blistering Reddit thread. As it turned out, Germar's custom open source hardware product wasn't custom, or open source. Thanks to infosec community chatter on Twitter and the Reddit thread, funders and observers discovered Anonabox's entire hardware package was actually an off-the-shelf Chinese router.



Quote for the day:

“You will never see an eagle of distinction flying low with pigeons of mediocrity.” -- Onyi Anyado

January 01, 2013


9 IT Career Resolutions for 2013
The New Year is almost here. 2012 proved to be a tumultuous year for IT pros and 2013 is looking to be just as challenging. However, the IT job market has been slowly gaining strength and as more companies start growing and adding staff, employees who have been waiting for this turnaround are getting ready to spring into action. Here's CIO.com's list of nine career-related resolutions to make for 2013.


Technology teaser: Who will call it right in 2013?
The world didn't end, the turkey has reached the dubious curry stage of its lifecycle, and the tree has dropped needles all over the shag pile. This can mean only one thing. It is time for part one of the traditional Technology of Business lookahead to what 2013 holds. So select the device of your choice and settle down to find out what our experts have to say.


Enterprise Architecture -- The Missing Ingredient for CIO Success?
The best road to being a successful CIO, in my opinion, is by being a successful enterprise architect. More attention needs to be paid to enterprise architecture by CIOs and those aspiring to be CIOs. Being a great business person, communicator and leader is far from being enough to be a successful and “highest performing” CIO. It is time to embrace enterprise architecture, including the critical role it has to play in IT modernization.


Automatic encryption of secure form field data
Karl Stoney shows a simple method of automatically encrypting hidden form fields that you don't want the user to be able to change, or know the value of. An extension to the HtmlHelper and a custom ModelBinder are used to handle the encryption and decryption, with Rijndael encryption securing the data. Note that this is simply one measure to ensure the security of data; you should always validate the action at the code and, finally, the database level to ensure a secure application.
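Stoney's approach is ASP.NET-specific, but the pattern itself is language-neutral: the server encrypts the field value and attaches an integrity check, so the user can neither read nor tamper with it. The teaching sketch below is not his implementation; it substitutes a toy SHA-256 keystream plus an HMAC tag for Rijndael purely to show the seal/verify round trip with the standard library. In production, use a real authenticated cipher (e.g. AES-GCM), exactly as the article uses AES/Rijndael.

```python
# Toy "sealed hidden field" sketch: encrypt-then-MAC a value server-side
# before embedding it in a form, and verify on the way back. Teaching
# sketch only; the keystream construction is NOT a production cipher.
import base64, hashlib, hmac, os

SECRET_KEY = b"server-side-secret-key"   # illustrative; load from config in practice

def _keystream(nonce, length):
    """Derive a per-request keystream from the key, nonce, and a counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(SECRET_KEY + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal_field(value):
    """Encrypt-and-MAC a field value for embedding in a hidden input."""
    nonce = os.urandom(16)
    data = value.encode()
    cipher = bytes(a ^ b for a, b in zip(data, _keystream(nonce, len(data))))
    tag = hmac.new(SECRET_KEY, nonce + cipher, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(nonce + cipher + tag).decode()

def open_field(token):
    """Verify the MAC and decrypt; raises ValueError if the field was altered."""
    raw = base64.urlsafe_b64decode(token.encode())
    nonce, cipher, tag = raw[:16], raw[16:-32], raw[-32:]
    expected = hmac.new(SECRET_KEY, nonce + cipher, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("field was tampered with")
    data = bytes(a ^ b for a, b in zip(cipher, _keystream(nonce, len(cipher))))
    return data.decode()
```

The HMAC verification step mirrors the article's closing caveat: encryption alone is one measure, and the server must still validate what comes back before acting on it.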


Microsoft Industry Reference Architecture for Banking (MIRA-B)
MIRA-B, a development and delivery framework that provides clarity on technical capabilities and implementation approaches to support enterprise IT architecture planning, helps financial institutions modularize and align business and technology assets in a predictable way. MIRA-B outlines a roadmap for the future that provides the architectural flexibility to deliver industry solutions on-premise or in the cloud.


IBM’s Reference Architecture for Creating Cloud Environments [Updated]
IBM has recently submitted the IBM Cloud Computing Reference Architecture 2.0 (CC RA) (.doc) to the Cloud Architecture Project of the Open Group, a document based on “real-world input from many cloud implementations across IBM” and meant to provide guidelines for creating a cloud environment. Update: now includes an interview with Heather Kreger, one of the authors of the Cloud Computing Reference Architecture.


4 Reasons Why 2013 Will Be The Year of The Innovator
Times change. Innovation is ripe again. Here are four reasons, apart from competitive pressures, why 2013 will see a surge in intelligent, strong ROI-related innovation writing, thinking and action. It could be your opportunity, too.


Data Center Consolidation and Adopting Cloud Computing in 2013
Cloud computing and virtualization continue to have an impact on all consolidation discussions, not only from the standpoint of providing a much better model for managing physical assets, but also in the potential cloud offers to solve disaster recovery shortfalls, improve standardization, and encourage or enable development of service-oriented architectures.


10 Phrases That Can Resolve Any Conflict
One of the biggest mistakes small-business owners make is trying to avoid conflict. This is problematic because in any capitalistic economy, conflict is not only an inevitable but a necessary part of all business. Through the process of “productive conflict,” companies can grow and become more profitable. The key is being able to resolve conflicts effectively. Lawrence Polsky and Antoine Gerschel, authors of Perfect Phrases for Conflict Resolution, describe the perfect phrases to resolve any conflict.


Defrag Tools: #21 - WinDbg - Memory User Mode
In this episode of Defrag Tools, Andrew Richards, Chad Beeder and Larry Larsen continue looking at the Debugging Tools for Windows (in particular WinDbg). WinDbg is a debugger that supports user mode debugging of a process, or kernel mode debugging of a computer.



Quote for the day:

"Planning is bringing the future into the present so that you can do something about it now" -- Alan Lakein