
Daily Tech Digest - May 20, 2025


Quote for the day:

"Success is liking yourself, liking what you do, and liking how you do it." -- Maya Angelou


Scalability and Flexibility: Every Software Architect's Challenge

Building successful business applications involves addressing practical challenges and strategic trade-offs. Cloud computing offers flexibility, but poor resource management can lead to ballooning costs. Organizations often face dilemmas when weighing feature richness against budget constraints. Engaging stakeholders early in the development process ensures alignment with priorities. ... Right-sizing cloud resources is essential for software architects, who can leverage tools to monitor usage and scale resources automatically based on demand. Serverless computing models, which charge only for execution time, are ideal for unpredictable workloads and seasonal fluctuations, ensuring organizations only use what they need when they need it. ... The next decade will usher in unprecedented opportunities for innovation in business applications. Regularly reviewing market trends and user feedback ensures applications remain relevant. Features like voice commands and advanced analytics are becoming standard as users demand more intuitive interfaces, boosting overall performance and creating new avenues for innovation. By regularly assessing application performance, user feedback, and market trends, software architects can stay alert and flexible and keep their systems relevant.
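
To make the serverless cost argument concrete, here is a rough Python sketch comparing a pay-per-execution workload with an always-on fleet. The per-unit rates and workload figures are invented for illustration and are not quotes from any provider.

```python
# Illustrative cost comparison for the "pay only for execution time" point.
# The per-unit rates below are example figures, not a quote from any provider.
def serverless_monthly_cost(invocations, avg_duration_s, memory_gb,
                            per_million_requests=0.20, per_gb_second=0.0000167):
    compute = invocations * avg_duration_s * memory_gb * per_gb_second
    requests = invocations / 1_000_000 * per_million_requests
    return compute + requests

def always_on_monthly_cost(instances, hourly_rate=0.10, hours=730):
    return instances * hourly_rate * hours

# A spiky seasonal workload: 2M invocations averaging 300 ms at 512 MB.
print(f"serverless: ${serverless_monthly_cost(2_000_000, 0.3, 0.5):,.2f}")
print(f"always-on : ${always_on_monthly_cost(2):,.2f}")
```

For the spiky workload in the example, paying only for execution time comes out far cheaper than keeping instances running all month, which is the trade-off the article describes for seasonal fluctuations.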


Navigating the Future of Network Security with Secure Access Service Edge (SASE)

As businesses expand their digital footprint, cyber attackers increasingly target unsecured cloud resources and remote endpoints. Traditional perimeter-based network and security architectures are not capable of protecting distributed environments. Therefore, organizations must adopt a holistic, future-proof network and cybersecurity architecture to succeed in this rapidly changing business landscape. The Challenges: Perimeter-based security revolves around defending the network’s boundary. It assumes that anyone who has gained access to the network is trusted and that everything outside the network is a potential threat. While this model worked well when applications, data, and users were contained within corporate walls, it is not adequate in a world where cloud applications and hybrid work are the norm. ... SASE is an architecture comprising a broad spectrum of technologies, including Zero Trust Network Access (ZTNA), Secure Web Gateway (SWG), Firewall as a Service (FWaaS), Cloud Access Security Broker (CASB), Data Loss Prevention (DLP), and Software-Defined Wide Area Networking (SD-WAN). Everything is embodied in a single, cloud-native platform that provides advanced cyber protection and seamless network performance for highly distributed applications and users.


Whether AI is a bubble or revolution, how does software survive?

Bubble or not, AI has certainly made some waves, and everyone is looking to find the right strategy. It’s already caused a great deal of disruption—good and bad—among software companies large and small. The speed at which the technology has moved since its coming-out party has been stunning; costs have dropped, hardware and software have improved, and the mediocre version of many jobs can be replicated in a chat window. It’s only going to continue. “AI is positioned to continuously disrupt itself,” said McConnell. “It's going to be a constant disruption. If that's true, then all of the dollars going to companies today are at risk because those companies may be disrupted by some new technology that's just around the corner.” First up on the list of disruption targets: startups. If you’re looking to get from zero to market fit, you don’t need to build the same kind of team you used to. “Think about the ratios between how many engineers there are to salespeople,” said Tunguz. “We knew what those were for 10 or 15 years, and now none of those ratios actually hold anymore. If we really are in a position that a single person can have the productivity of 25, management teams look very different. Hiring looks extremely different.” That’s not to say there won’t be a need for real human coders. We’ve seen how badly the vibe coding entrepreneurs get dunked on when they put their shoddy apps in front of a merciless internet.


The AI security gap no one sees—until it’s too late

The most serious—and least visible—gaps stem from the “Jenga-style” layering of managed AI services, where cloud providers stack one service on another and ship them with user-friendly but overly permissive defaults. Tenable’s 2025 Cloud AI Risk Report shows that 77 percent of organisations running Google Cloud’s Vertex AI Workbench leave the notebook’s default Compute Engine service account untouched; that account is an all-powerful identity which, if hijacked, lets an attacker reach every other dependent service. ... CIOs should treat every dataset in the AI pipeline as a high-value asset. Begin with automated discovery and classification across all clouds so you know exactly where proprietary corpora or customer PII live, then encrypt them in transit and at rest in private, version-controlled buckets. Enforce least-privilege access through short-lived service-account tokens and just-in-time elevation, and isolate training workloads on segmented networks that cannot reach production stores or the public internet. Feed telemetry from storage, IAM and workload layers into a Cloud-Native Application Protection Platform that includes Data Security Posture Management; this continuously flags exposed buckets, over-privileged identities and vulnerable compute images, and pushes fixes into CI/CD pipelines before data can leak.
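
The article's example concerns Google Cloud defaults, but the "continuously flag exposed storage" idea generalizes. Below is a minimal, hedged Python sketch using boto3 that flags S3 buckets missing a public-access block; it illustrates the kind of check a DSPM or CNAPP tool automates, not any vendor's implementation.

```python
# Hypothetical sketch: flag S3 buckets that lack a public-access block,
# the kind of check a DSPM/CNAPP tool would run continuously.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_missing_public_access_block():
    """Return bucket names with no public-access block, or one that is incomplete."""
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            # All four settings must be enabled to be considered locked down.
            if not all(config.values()):
                flagged.append(name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)  # no block configured at all
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print(f"Exposed-bucket candidate: {name}")
```

In a production setup the output would feed a ticketing or CI/CD workflow rather than stdout, matching the "push fixes into pipelines" point above.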


5 questions defining the CIO agenda today

CIOs along with their executive colleagues and board members “realize that hacks and disruptions by bad actors are an inevitability,” SIM’s Taylor says. That realization has shifted security programs from being mostly defensive measures to ones that continuously evolve the organization’s ability to identify breaches quickly, respond rapidly, and return to operations as fast as possible, Taylor says. The goal today is ensuring resiliency — even as the bad actors and their attack strategies evolve. ... Building a tech stack that can grow and retract with business needs, and that can evolve quickly to capitalize on an ever-shifting technology landscape, is no easy feat, Phelps and other IT leaders readily admit. “In modernizing, it’s such a moving target, because once you got it modernized, something new can come out that’s better and more automated. The entire infrastructure is evolving so quickly,” says Diane Gutiw ... “CIOs should be asking, ‘How do I change or adapt what I do now to be able to manage a hybrid workforce? What does the future of work look like? How do I manage that in a secure, responsible way and still take advantage of the efficiencies? And how do I let my staff be innovative without violating regulation?’” Gutiw says, noting that today’s managers “are the last generation of people who will only manage people.”


Microsoft just taught its AI agents to talk to each other—and it could transform how we work

Microsoft is giving organizations more flexibility with their AI models by enabling them to bring custom models from Azure AI Foundry into Copilot Studio. This includes access to over 1,900 models, including the latest from OpenAI GPT-4.1, Llama, and DeepSeek. “Start with off-the-shelf models because they’re already fantastic and continuously improving,” Smith said. “Companies typically choose to fine-tune these models when they need to incorporate specific domain language, unique use cases, historical data, or customer requirements. This customization ultimately drives either greater efficiency or improved accuracy.” The company is also adding a code interpreter feature that brings Python capabilities to Copilot Studio agents, enabling data analysis, visualization, and complex calculations without leaving the Copilot Studio environment. Smith highlighted financial applications as a particular strength: “In financial analysis and services, we’ve seen a remarkable breakthrough over the past six months,” Smith said. “Deep reasoning models, powered by reinforcement learning, can effectively self-verify any process that produces quantifiable outputs.” He added that these capabilities excel at “complex financial analysis where users need to generate code for creating graphs, producing specific outputs, or conducting detailed financial assessments.”


Culture fit is a lie: It’s time we prioritised culture add

The idea of culture fit originated with the noble intent of fostering team cohesion. But over time, it has become an excuse to hire people who are familiar, comfortable and easy to manage. In doing so, companies inadvertently create echo chambers—workforces that lack diverse perspectives, struggle to challenge the status quo and fail to innovate. Ankur Sharma, Co-Founder & Head of People at Rebel Foods, understands this well. Speaking at the TechHR Pulse Mumbai 2025 conference, Sharma explained how Rebel Foods moved beyond hiring for cultural likeness. “We are not building a family; we are building a winning team,” he said, emphasising that what truly matters is competency, accountability and adaptability. The problem with culture fit is not just about homogeneity—it’s about stagnation. When teams are made up of individuals who think alike, they lose the ability to see challenges from multiple angles. Companies that prioritise cultural uniformity often struggle to pivot in response to industry shifts. ... Leading organisations are abandoning the notion of culture fit and shifting towards ‘culture add’—hiring employees who bring fresh ideas, challenge existing norms, and contribute new perspectives. Instead of asking, ‘Will this person fit in?’, hiring managers are asking, ‘What unique value does this person bring?’


Closing security gaps in multi-cloud and SaaS environments

Many organizations are underestimating the risk — especially as the nature of attacks evolves. Traditional behavioral detection methods often fall short in spotting modern threats such as account hijacking, phishing, ransomware, data exfiltration, and denial of service attacks. Detecting these types of attacks requires correlation and traceability across different sources, including runtime events with eBPF, cloud audit logs, and APIs across both cloud infrastructure and SaaS. ... As attackers adopt stealthier tactics — from GenAI-generated malware to supply chain compromises — traditional signature- and rule-based methods fall short. ... A unified cloud and SaaS security strategy means moving away from treating infrastructure, applications, and SaaS as isolated security domains. Instead, it focuses on delivering seamless visibility, risk prioritization, and automated response across the full spectrum of enterprise environments — from legacy on-premises to dynamic cloud workloads to business-critical SaaS platforms and applications. ... Native CSP and SaaS telemetry is essential, but it’s not enough on its own. Continuous inventory and monitoring across identity, network, compute, and AI is critical — especially to detect misconfigurations and drift.


AI-Driven Test Automation Techniques for Multimodal Systems

Traditional testing frameworks struggle to meet these demands, particularly as multimodal systems continuously evolve through real-time updates and training. Consequently, AI-powered test automation has emerged as a promising paradigm to ensure scalable and reliable testing processes for multimodal systems. ... Natural Language Processing (NLP)-powered AI tools will understand and define the requirements in a more elaborate and defined structure. This will detect any ambiguity and gaps in requirements. For example, given the requirement “System should display message quickly,” an AI tool will identify the need for a precise definition of the word “quickly.” It looks simple, but if missed, it could lead to significant performance issues in production. ... Based on AI-generated requirements and business scenarios, AI-based tools can generate test strategy documents by identifying resources, constraints, and dependencies between systems. All this can be achieved with NLP AI tools ... AI-driven test automation solutions can improve shift-left testing even more by generating automated test scripts faster. Testers can run automation at an early stage when the code is ready to test. AI tools like ChatGPT provide script code in any language, like Java or Python, based on simple text input, using NLP models to generate code for automation scripts.
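
As a hedged illustration of the "text requirement in, test script out" idea, the sketch below asks an LLM to draft a pytest test from a plain-text requirement. It assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment; the model name and prompt are placeholders, and any generated script should be reviewed before it goes anywhere near a repository.

```python
# Illustrative sketch (not from the article): asking an LLM to draft a pytest
# script from a plain-text requirement, and to flag ambiguous wording.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REQUIREMENT = "System should display the welcome message within 2 seconds of login."

prompt = (
    "Write a pytest test for this requirement. "
    "Flag any ambiguous wording (e.g. 'quickly') as a comment:\n" + REQUIREMENT
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # human review before committing
```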


IGA: What Is It, and How Can SMBs Use It?

The first step in a total IGA strategy has nothing to do with software. It actually starts with IT and business leaders determining what the rules of identity governance and behavior should be. The benefit of having a smaller organization is that there are not quite as many stakeholders as in an enterprise. The challenge, of course, is that people, time and resources are limited. IT may have to assume the role of facilitator and earn buy-in. Nevertheless, this is a worthwhile exercise, as it can help establish a platform for secure growth in the future. And again, for SMBs in regulatory-heavy industries — especially finance, healthcare and government contractors — IGA should be a top priority. ... To do this, CIOs should first procure support from key stakeholders by meeting with them individually to explain the need for IGA as an overarching security technology and policy platform for digital security. In these discussions, CIOs can present the long-term benefits of an IGA program that can streamline user identity verification across services while easing audits and automating compliance. ... A strategic roadmap for IGA should involve minimally disruptive business and user adoption and quick technology implementation. One way to do this is to create a phased implementation approach that tackles the most mission-critical and sensitive systems first before extending to other areas of IT.

Daily Tech Digest - May 09, 2025


Quote for the day:

"Create a compelling vision, one that takes people to a new place, and then translate that vision into a reality." -- Warren G. Bennis


The CIO Role Is Expanding -- And So Are the Risks of Getting It Wrong

“We are seeing an increased focus of organizations giving CIOs more responsibility to impact business strategy as well as tie it into revenue growth,” says Sal DiFranco, managing partner of the global advanced technology and CIO/CTO practices at DHR Global. He explains CIOs who are focused on technology only for technology's sake and don’t have clear examples of business strategy and impact are not being sought after. “While innovation experience is important to have, it must come with a strong operational mindset,” DiFranco says. ... He adds it is critical for CIOs to understand and articulate the return on investment concerning technology investments. “Top CIOs have shifted their thinking to a P&L mindset and act, speak, and communicate as the CEO of the technology organization versus being a functional support group,” he says. ... Gilbert says the greatest risk isn’t technical failure, it’s leadership misalignment. “When incentives, timelines, or metrics don’t sync across teams, even the strongest initiatives falter,” he explains. To counter this, he works to align on a shared definition of value from day one, setting clear, business-focused key performance indicators (KPIs), not just deployment milestones. Structured governance helps, too: Transparent reporting, cross-functional steering committees, and ongoing feedback loops keep everyone on track.


How to Build a Lean AI Strategy with Data

In simple terms, Lean AI means focusing on trusted, purpose-driven data to power faster, smarter outcomes with AI—without the cost, complexity, and sprawl that defines most enterprise AI initiatives today. Traditional enterprise AI often chases scale for its own sake: more data, bigger models, larger clouds. Lean AI flips that model—prioritizing quality over quantity, outcomes over infrastructure, and agility over over-engineering. ... A lean AI strategy focuses on curating high-quality, purpose-driven datasets tailored to specific business goals. Rather than defaulting to massive data lakes, organizations continuously collect data but prioritize which data to activate and operationalize based on current needs. Lower-priority data can be archived cost-effectively, minimizing unnecessary processing costs while preserving flexibility for future use. ... Data governance plays a pivotal role in lean AI strategies—but it should be reimagined. Traditional governance frameworks often slow innovation by restricting access and flexibility. In contrast, lean AI governance enhances usability and access while maintaining security and compliance. ... Implementing lean AI requires a cultural shift in how organizations manage data. Focusing on efficiency, purpose, and continuous improvement can drive innovation without unnecessary costs or risks—a particularly valuable approach when cost pressures are increasing.


Networking errors pose threat to data center reliability

“Data center operators are facing a growing number of external risks beyond their control, including power grid constraints, extreme weather, network provider failures, and third-party software issues. And despite a more volatile risk landscape, improvements are occurring.” ... “Power has been the leading cause. Power is going to be the leading cause for the foreseeable future. And one should expect it because every piece of equipment in the data center, whether it’s a facilities piece of equipment or an IT piece of equipment, it needs power to operate. Power is pretty unforgiving,” said Chris Brown, chief technical officer at Uptime Institute, during a webinar sharing the report findings. “It’s fairly binary. From a practical standpoint of being able to respond, it’s pretty much on or off.” ... Still, IT and networking issues increased in 2024, according to Uptime Institute. The analysis attributed the rise in outages to increased IT and network complexity, specifically change management and misconfigurations. “Particularly with distributed services, cloud services, we find that cascading failures often occur when networking equipment is replicated across an entire network,” Lawrence explained. “Sometimes the failure of one forces traffic to move in one direction, overloading capacity at another data center.”


Unlocking ROI Through Sustainability: How Hybrid Multicloud Deployment Drives Business Value

One of the key advantages of hybrid multicloud is the ability to optimise workload placement dynamically. Traditional on-premises infrastructure often forces businesses to overprovision resources, leading to unnecessary energy consumption and underutilisation. With a hybrid approach, workloads can seamlessly move between on-prem, public cloud, and edge environments based on real-time requirements. This flexibility enhances efficiency and helps mitigate risks associated with cloud repatriation. Many organisations have found that shifting back from public cloud to on-premises infrastructure is sometimes necessary due to regulatory compliance, data sovereignty concerns, or cost considerations. A hybrid multicloud strategy ensures organisations can make these transitions smoothly without disrupting operations. ... With the dynamic nature of cloud environments, enterprises really require solutions that offer a unified view of their hybrid multicloud infrastructure. Technologies that integrate AI-driven insights to optimise energy usage and automate resource allocation are gaining traction. For example, some organisations have addressed these challenges by adopting solutions such as Nutanix Cloud Manager (NCM), which helps businesses track sustainability metrics while maintaining operational efficiency.


'Lemon Sandstorm' Underscores Risks to Middle East Infrastructure

The compromise started at least two years ago, when the attackers used stolen VPN credentials to gain access to the organization's network, according to a May 1 report published by cybersecurity firm Fortinet, which helped with the remediation process that began late last year. Within a week, the attacker had installed Web shells on two external-facing Microsoft Exchange servers and then updated those backdoors to improve their ability to remain undetected. In the following 20 months, the attackers added more functionality, installed additional components to aid persistence, and deployed five custom attack tools. The threat actors, which appear to be part of an Iran-linked group dubbed "Lemon Sandstorm," did not seem focused on compromising data, says John Simmons, regional lead for Fortinet's FortiGuard Incident Response team. "The threat actor did not carry out significant data exfiltration, which suggests they were primarily interested in maintaining long-term access to the OT environment," he says. "We believe the implication is that they may [have been] positioning themselves to carry out a future destructive attack against this CNI." Overall, the attack follows a shift by cyber-threat groups in the region, which are now increasingly targeting CNI. 


Cloud repatriation hits its stride

Many enterprises are now confronting a stark reality. AI is expensive, not just in terms of infrastructure and operations, but in the way it consumes entire IT budgets. Training foundational models or running continuous inference pipelines takes resources an order of magnitude greater than the average SaaS or data analytics workload. As competition in AI heats up, executives are asking tough questions: Is every app in the cloud still worth its cost? Where can we redeploy dollars to speed up our AI road map? ... Repatriation doesn’t signal the end of cloud, but rather the evolution toward a more pragmatic, hybrid model. Cloud will remain vital for elastic demand, rapid prototyping, and global scale—no on-premises solution can beat cloud when workloads spike unpredictably. But for the many applications whose requirements never change and whose performance is stable year-round, the lure of lower-cost, self-operated infrastructure is too compelling in a world where AI now absorbs so much of the IT spend. In this new landscape, IT leaders must master workload placement, matching each application to a technical requirement and a business and financial imperative. Sophisticated cost management tools are on the rise, and the next wave of cloud architects will be those as fluent in finance as they are in Kubernetes or Terraform.


6 tips for tackling technical debt

Like most everything else in business today, debt can’t successfully be managed if it’s not measured, Sharp says, adding that IT needs to get better at identifying, tracking, and measuring tech debt. “IT always has a sense of where the problems are, which closets have skeletons in them, but there’s often not a formal analysis,” he says. “I think a structured approach to looking at this could be an opportunity to think about things that weren’t considered previously. So it’s not just knowing we have problems but knowing what the issues are and understanding the impact. Visibility is really key.” ... Most organizations have some governance around their software development programs, Buniva says. But a good number of those governance programs are not as strong as they should be nor detailed enough to inform how teams should balance speed with quality — a fact that becomes more obvious with the increasing speed of AI-enabled code production. ... Like legacy tech more broadly, code debt is a fact of life and, as such, will never be completely paid down. So instead of trying to get the balance to zero, IT exec Rishi Kaushal prioritizes fixing the most problematic pieces — the ones that could cost his company the most. “You don’t want to focus on fixing technical debt that takes a long time and a lot of money to fix but doesn’t bring any value in fixing,” says Kaushal.


AI Won’t Save You From Your Data Modeling Problems

Historically, data modeling was a business intelligence (BI) and analytics concern, focused on structuring data for dashboards and reports. However, AI applications shift this responsibility to the operational layer, where real-time decisions are made. While foundation models are incredibly smart, they can also be incredibly dumb. They have vast general knowledge but lack context and your information. They need structured and unstructured data to provide this context, or they risk hallucinating and producing unreliable outputs. ... Traditional data models were built for specific systems, relational for transactions, documents for flexibility and graphs for relationships. But AI requires all of them at once because an AI agent might talk to the transactional database first for enterprise application data, such as flight schedules from our previous example. Then, based on that response, query a document to build a prompt that uses a semantic web representation for flight-rescheduling logic. In this case, a single model format isn’t enough. This is why polyglot data modeling is key. It allows AI to work across structured and unstructured data in real time, ensuring that both knowledge retrieval and decision-making are informed by a complete view of business data.
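
A toy Python sketch of that polyglot flow is shown below: a relational lookup for the transactional data, then a document-style lookup for the rescheduling policy, with both results assembled into the agent's prompt. The table, field, and policy names are invented for illustration.

```python
# Minimal sketch of the "polyglot" flow described above: a relational lookup
# for transactional data, then a document lookup for rescheduling logic.
import sqlite3

# --- relational side: flight schedules ---
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE flights (flight_no TEXT, departs TEXT, status TEXT)")
db.execute("INSERT INTO flights VALUES ('AC101', '2025-05-20T09:00', 'DELAYED')")

flight = db.execute(
    "SELECT flight_no, departs, status FROM flights WHERE flight_no = ?", ("AC101",)
).fetchone()

# --- document side: rescheduling policy keyed by status ---
policies = {
    "DELAYED": {"action": "offer_rebooking", "window_hours": 24},
    "CANCELLED": {"action": "full_refund", "window_hours": 72},
}
policy = policies[flight[2]]

# The agent's prompt is assembled from both sources before calling the model.
prompt = (
    f"Flight {flight[0]} departing {flight[1]} is {flight[2]}. "
    f"Apply policy: {policy['action']} within {policy['window_hours']} hours."
)
print(prompt)
```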


Your password manager is under attack, and this new threat makes it worse

"Password managers are high-value targets and face constant attacks across multiple surfaces, including cloud infrastructure, client devices, and browser extensions," said NordPass PR manager Gintautas Degutis. "Attack vectors range from credential stuffing and phishing to malware-based exfiltration and supply chain risks." Googling the phrase "password manager hacked" yields a distressingly long list of incursions. Fortunately, in most of those cases, passwords and other sensitive information were sufficiently encrypted to limit the damage. ... One of the most recent and terrifying threats to make headlines came from SquareX, a company selling solutions that focus on the real-time detection and mitigation of browser-based web attacks. SquareX spends a great deal of its time obsessing over the degree to which browser extension architectures represent a potential vector of attack for hackers. ... For businesses and enterprises, the attack is predicated on one of two possible scenarios. In the first scenario, users are left to make their own decisions about what extensions are loaded onto their systems. In this case, they are putting the entire enterprise at risk. In the second scenario, someone in an IT role with the responsibility of managing the organization's approved browser and extension configurations has to be asleep at the wheel. 


Developing Software That Solves Real-World Problems – A Technologist’s View

Software architecture is not just a technical plan but a way to turn an idea into reality. A good system can model users’ behaviors and usage, expand to meet demand, secure data and combine well with other systems. It takes the concepts of distributed systems, APIs, security layers and front-end interfaces into one cohesive and easy-to-use product. I have been involved with building APIs that are crucial for the integration of multiple products to provide a consistent user experience to consumers of these products. Along with the group of architects, we played a crucial role in breaking down these complex integrations into manageable components and designing easy-to-implement API interfaces. Also, using cloud services, these APIs were designed to be highly resilient. ... One of the most important lessons I have learned as a technologist is that just because we can build something does not mean we should. While working on a project related to financing a car, we were able to collect personally identifiable information (PII). Initially, we had it stored for a long duration. However, we were unaware of the implications. When we discussed the situation with the architecture and security teams, we found out that we did not have ownership of the data and that it was very risky to store it for a long period. We mitigated the risk by reducing the data retention period to what is useful to users.
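
As a hypothetical sketch of that retention fix, the snippet below purges PII rows older than an agreed window. The database, table, and column names are invented, and a real system would also cover backups, audit logging, and downstream copies.

```python
# Hypothetical sketch of the retention fix described above: keep PII only for
# a defined window and purge anything older. Assumes an applicant_pii table
# with an ISO-8601 collected_at column; names are invented for illustration.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # agreed with the security and architecture teams

db = sqlite3.connect("financing.db")
cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).isoformat()

# Delete applicant PII captured before the cutoff; keep non-identifying records.
deleted = db.execute(
    "DELETE FROM applicant_pii WHERE collected_at < ?", (cutoff,)
).rowcount
db.commit()
print(f"Purged {deleted} expired PII records")
```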

Daily Tech Digest - January 30, 2025


Quote for the day:

"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley


Doing authentication right

Like encryption, authentication is one of those things that you are tempted to “roll your own” but absolutely should not. The industry has progressed enough that you should definitely “buy and not build” your authentication solution. Plenty of vendors offer easy-to-implement solutions and stay diligently on top of the latest security issues. Authentication also becomes a tradeoff between security and a good user experience. ... Passkeys are a relatively new technology and there is a lot of FUD floating around out there about them. The bottom line is that they are safe, secure, and easy for your users. They should be your primary way of authenticating. Several vendors make implementing passkeys not much harder than inserting a web component in your application. ... Forcing users to use hard-to-remember passwords means they will be more likely to write them down or use a simple password that meets the requirements. Again, it may seem counterintuitive, but XKCD has it right. In addition, the longer the password, the harder it is to crack. Let your users create long, easy-to-remember passwords rather than force them to use shorter, difficult-to-remember passwords. ... Six digits is the outer limit for OTP links, and you should consider shorter ones. Under no circumstances should you require OTPs longer than six digits because they are vastly harder for users to keep in short-term memory.
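
To ground the OTP guidance, here is a small sketch using only Python's standard library: it generates a six-digit code from a cryptographically secure source and compares codes in constant time. Delivery, expiry, and rate limiting are assumed to live elsewhere in the system.

```python
# Quick sketch of the six-digit OTP guidance above, using Python's stdlib only.
import secrets
import hmac

def generate_otp(digits: int = 6) -> str:
    """Return a zero-padded one-time code of the given length."""
    return f"{secrets.randbelow(10 ** digits):0{digits}d}"

def verify_otp(submitted: str, expected: str) -> bool:
    """Compare codes in constant time to avoid timing leaks."""
    return hmac.compare_digest(submitted, expected)

code = generate_otp()
print(code, verify_otp(code, code))
```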


Augmenting Software Architects with Artificial Intelligence

Technical debt is mistakenly thought of as just a source code problem, but the concept is also applicable to source data (this is referred to as data debt) as well as your validation assets. AI has been used for years to analyze existing systems to identify potential opportunities to improve the quality (to pay down technical debt). SonarQube, CAST SQG and BlackDuck’s Coverity Static Analysis statically analyze existing code. Applitools Visual AI dynamically finds user interface (UI) bugs, and Veracode’s DAST finds runtime vulnerabilities in web apps. The advantage of this use case is that it pinpoints aspects of your implementation that potentially should be improved. As described earlier, AI tooling offers the potential for greater range, thoroughness, and trustworthiness of the work products as compared with that of people. Drawbacks to using AI tooling to identify technical debt include the accuracy, IP, and privacy risks described above. ... As software architects, we regularly work with legacy implementations that we need to leverage and often evolve. This software is often complex, using a myriad of technologies for reasons that have been forgotten over time. Tools such as CAST Imaging, which visualizes existing code, and ChartDB, which visualizes legacy data schemas, provide a “birds-eye view” of the actual situation that you face.


Keep Your Network Safe From the Double Trouble of a ‘Compound Physical-Cyber Threat'

Your first step should be to evaluate the state of your company’s cyber defenses, including communications and IT infrastructure, and the cybersecurity measures you already have in place—identifying any vulnerabilities and gaps. One vulnerability to watch for is a dependence on multiple security platforms, patches, policies, hardware, and software, where a lack of tight integration can create gaps that hackers can readily exploit. Consider using operational resilience assessment software as part of the exercise, and if you lack the internal know-how or resources to manage the assessment, consider enlisting a third-party operational resilience risk consultant. ... Aging network communications hardware and software, including on-premises systems and equipment, are top targets for hackers during a disaster because they often include a single point of failure that’s readily exploitable. The best counter in many cases is to move the network and other key communications infrastructure (a contact center, for example) to the cloud. Not only do cloud-based networks such as SD-WAN (software-defined wide area network) have the resilience and flexibility to preserve connectivity during a disaster, they also tend to come with built-in cybersecurity measures.


California’s AG Tells AI Companies Practically Everything They’re Doing Might Be Illegal

“The AGO encourages the responsible use of AI in ways that are safe, ethical, and consistent with human dignity,” the advisory says. “For AI systems to achieve their positive potential without doing harm, they must be developed and used ethically and legally,” it continues, before dovetailing into the many ways in which AI companies could, potentially, be breaking the law. ... There has been quite a lot of, shall we say, hyperbole, when it comes to the AI industry and what it claims it can accomplish versus what it can actually accomplish. Bonta’s office says that, to steer clear of California’s false advertising law, companies should refrain from “claiming that an AI system has a capability that it does not; representing that a system is completely powered by AI when humans are responsible for performing some of its functions; representing that humans are responsible for performing some of a system’s functions when AI is responsible instead; or claiming without basis that a system is accurate, performs tasks better than a human would, has specified characteristics, meets industry or other standards, or is free from bias.” ... Bonta’s memo clearly illustrates what a legal clusterfuck the AI industry represents, though it doesn’t even get around to mentioning U.S. copyright law, which is another legal gray area where AI companies are perpetually running into trouble.


Knowledge graphs: the missing link in enterprise AI

Knowledge graphs are a layer of connective tissue that sits on top of raw data stores, turning information into contextually meaningful knowledge. So in theory, they’d be a great way to help LLMs understand the meaning of corporate data sets, making it easier and more efficient for companies to find relevant data to embed into queries, and making the LLMs themselves faster and more accurate. ... Knowledge graphs reduce hallucinations, he says, but they also help solve the explainability challenge. Knowledge graphs sit on top of traditional databases, providing a layer of connection and deeper understanding, says Anant Adya, EVP at Infosys. “You can do better contextual search,” he says. “And it helps you drive better insights.” Infosys is now running proof of concepts to use knowledge graphs to combine the knowledge the company has gathered over many years with gen AI tools. ... When a knowledge graph is used as part of the RAG infrastructure, explicit connections can be used to quickly zero in on the most relevant information. “It becomes very efficient,” said Duvvuri. And companies are taking advantage of this, he says. “The hard question is how many of those solutions are seen in production, which is quite rare. But that’s true of a lot of gen AI applications.”
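
The toy sketch below shows the graph-grounded retrieval idea in Python with networkx: pull an entity's neighbouring facts out of a small knowledge graph and feed them into the prompt as context. The entities and relations are invented, and this is not Infosys' implementation.

```python
# Toy sketch of graph-grounded retrieval for RAG: retrieve an entity's
# outgoing facts from a knowledge graph and use them as prompt context.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("Acme Corp", "Widget X", relation="manufactures")
kg.add_edge("Widget X", "EU", relation="certified_for")
kg.add_edge("Acme Corp", "Berlin Plant", relation="operates")

def graph_context(entity: str) -> str:
    """Return the entity's outgoing facts as plain-text context for an LLM."""
    facts = [
        f"{entity} {kg[entity][nbr]['relation']} {nbr}"
        for nbr in kg.successors(entity)
    ]
    return "\n".join(facts)

question = "Where can Widget X be sold?"
prompt = f"Context:\n{graph_context('Acme Corp')}\n\nQuestion: {question}"
print(prompt)
```

Because the relevant facts are fetched through explicit edges rather than similarity search alone, the model sees a small, targeted context, which is the efficiency and explainability point made above.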


U.S. Copyright Office says AI generated content can be copyrighted — if a human contributes to or edits it

The Copyright Office determined that prompts are generally instructions or ideas rather than expressive contributions, which are required for copyright protection. Thus, an image generated with a text-to-image AI service such as Midjourney or OpenAI’s DALL-E 3 (via ChatGPT), on its own could not qualify for copyright protection. However, if the image was used in conjunction with a human-authored or human-edited article (such as this one), then it would seem to qualify. Similarly, for those looking to use AI video generation tools such as Runway, Pika, Luma, Hailuo, Kling, OpenAI Sora, Google Veo 2 or others, simply generating a video clip based on a description would not qualify for copyright. Yet, a human editing together multiple AI generated video clips into a new whole would seem to qualify. The report also clarifies that using AI in the creative process does not disqualify a work from copyright protection. If an AI tool assists an artist, writer or musician in refining their work, the human-created elements remain eligible for copyright. This aligns with historical precedents, where copyright law has adapted to new technologies such as photography, film and digital media. ... While some had called for additional protections for AI-generated content, the report states that existing copyright law is sufficient to handle these issues.


From connectivity to capability: The next phase of private 5G evolution

Faster connectivity is just one positive aspect of private 5G networks; they are the basis of the current digital era. These networks outperform conventional public 5G capabilities, giving businesses incomparable control, security, and flexibility. For instance, private 5G is essential to the seamless connection of billions of devices, ensuring ultra-low latency and excellent reliability in the worldwide IoT industry, which has the potential to reach $650.5 billion by 2026, as per Markets and Markets. Take digital twins, for example—virtual replicas of physical environments such as factories or entire cities. These replicas require real-time data streaming and ultra-reliable bandwidth to function effectively. Private 5G enables this by delivering consistent performance, turning theoretical models into practical tools that improve operational efficiency and decision-making. ... Also, for sectors that rely on efficiency and precision, private 5G is making big improvements. For instance, in the logistics sector, it connects fleets, warehouses, and ports with fast, low-latency networks, streamlining operations throughout the supply chain. In fleet management, private 5G allows real-time tracking of vehicles, improving route planning and fuel use.


American CISOs should prepare now for the coming connected-vehicle tech bans

The rule BIS released is complex and intricate and relies on many pre-existing definitions and policies used by the Commerce Department for different commercial and industrial matters. However, in general, the restrictions and compliance obligations under the rule affect the entire US automotive industry, including all new on-road vehicles sold in the United States (except commercial vehicles such as heavy trucks, for which rules will be determined later). All companies in the automotive industry, including importers and manufacturers of CVs, equipment manufacturers, and component suppliers, will be affected. BIS said it may grant limited specific authorizations to allow mid-generation CV manufacturers to participate in the rule’s implementation period, provided that the manufacturers can demonstrate they are moving into compliance with the next generation. ... Connected vehicles and related component suppliers are required to scrutinize the origins of vehicle connectivity systems (VCS) hardware and automated driving systems (ADS) software to ensure compliance. Suppliers must exclude components with links to the PRC or Russia, which has significant implications for sourcing practices and operational processes.


What to know about DeepSeek AI, from cost claims to data privacy

"Users need to be aware that any data shared with the platform could be subject to government access under China's cybersecurity laws, which mandate that companies provide access to data upon request by authorities," Adrianus Warmenhoven, a member of NordVPN's security advisory board, told ZDNET via email. According to some observers, the fact that R1 is open-source means increased transparency, giving users the opportunity to inspect the model's source code for signs of privacy-related activity. Regardless, DeepSeek also released smaller versions of R1, which can be downloaded and run locally to avoid any concerns about data being sent back to the company (as opposed to accessing the chatbot online). ... "DeepSeek's new AI model likely does use less energy to train and run than larger competitors' models," confirms Peter Slattery, a researcher on MIT's FutureTech team who led its Risk Repository project. "However, I doubt this marks the start of a long-term trend in lower energy consumption. AI's power stems from data, algorithms, and compute -- which rely on ever-improving chips. When developers have previously found ways to be more efficient, they have typically reinvested those gains into making even bigger, more powerful models, rather than reducing overall energy usage."


The AI Imperative: How CIOs Can Lead the Charge

For CIOs, AGI will take this to the next level. Imagine systems that don't just fix themselves but also strategize, optimize and innovate. AGI could automate 90% of IT operations, freeing up teams to focus on strategic initiatives. It could revolutionize cybersecurity by anticipating and neutralizing threats before they strike. It could transform data into actionable insights, driving smarter decisions across the organization. The key is to begin incrementally, prove the value and scale strategically. AGI isn't just a tool; it's a game-changer. ... Cybersecurity risks are real and imminent. Picture this: you're using an open-source AI model and suddenly, your system gets hacked. Turns out, a malicious contributor slipped in some rogue code. Sounds like a nightmare, right? Open-source AI is powerful, but has its fair share of risks. Vulnerabilities in the code, supply chain attacks and lack of appropriate vendor support are absolutely real concerns. But this is true for any new technology. With the right safeguards, we can minimize and mitigate these risks. Here's what I recommend: Regularly review and update open-source libraries. CIOs should encourage their teams to use tools like software composition analysis to detect suspicious changes. Train your team to manage and secure open-source AI deployments. 

Daily Tech Digest - December 29, 2024

AI agents may lead the next wave of cyberattacks

“Many organizations run a pen test on maybe an annual basis, but a lot of things change within an application or website in a year,” he said. “Traditional cybersecurity organizations within companies have not been built for constant self-penetration testing.” Stytch is attempting to improve upon what McGinley-Stempel said are weaknesses in popular authentication schemes such as the Completely Automated Public Turing test to tell Computers and Humans Apart, or captcha, a type of challenge-response test used to determine whether a user interacting with a system is a human or a bot. Captcha codes may require users to decipher scrambled letters or count the number of traffic lights in an image. ... “If you’re just going to fight machine learning models on the attacking side with ML models on the defensive side, you’re going to get into some bad probabilistic situations that are not going to necessarily be effective,” he said. Probabilistic security provides protections based on probabilities but assumes that absolute security can’t be guaranteed. Stytch is working on deterministic approaches such as fingerprinting, which gathers detailed information about a device or software based on known characteristics and can provide a higher level of certainty that the user is who they say they are.
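
A simplified sketch of the deterministic fingerprinting idea follows: hash a stable set of client attributes into an identifier that can be compared across sessions. Real products combine far richer signals with server-side checks; the attributes here are purely illustrative.

```python
# Simplified sketch of device fingerprinting: hash a stable set of client
# attributes into an identifier and compare it against devices seen before.
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Return a stable hash over sorted client attributes."""
    canonical = json.dumps(attributes, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

known = device_fingerprint({
    "user_agent": "Mozilla/5.0 ...",
    "platform": "MacIntel",
    "timezone": "Europe/London",
    "screen": "2560x1440",
})

current = device_fingerprint({
    "user_agent": "Mozilla/5.0 ...",
    "platform": "MacIntel",
    "timezone": "Europe/London",
    "screen": "2560x1440",
})

print("known device" if current == known else "challenge the user")
```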


How businesses can ensure cloud uptime over the holidays

To ensure uptime during the holidays, best practice should include conducting pre-holiday stress tests to identify system vulnerabilities and configure autoscaling to handle demand surges. Experts also recommend simulating failures through chaos engineering to expose weaknesses. Redundancy across regions or availability zones is essential, as is a well-documented incident response plan – with clear escalation paths – “as this allows a team to address problems quickly even with reduced staffing,” says VimalRaj Sampathkumar, technical head – UKI at software company ManageEngine. It’s all about understanding the business requirements and what your demand is going to look like, says Luan Hughes, chief information officer (CIO) at tech provider Telent, as this will vary from industry to industry. “When we talk about preparedness, we talk a lot about critical incident management and what happens when big things occur, but I think you need to have an appreciation of what your triggers are,” she says. ... It’s also important to focus on your people as much as your systems, she adds, noting that it’s imperative to understand your management processes, out-of-hours and on-call rota and how you action support if problems do arise.


Tech worker movements grow as threats of RTO, AI loom

While layoffs likely remain the most extreme threat to tech workers broadly, a return-to-office (RTO) mandate can be just as jarring for remote tech workers who are either unable to comply or else unwilling to give up the better work-life balance that comes with no commute. Advocates told Ars that RTO policies have pushed workers to join movements, while limited research suggests that companies risk losing top talents by implementing RTO policies. ... Other companies mandating RTO faced similar backlash from workers, who continued to question the logic driving the decision. One February study showed that RTO mandates don't make companies any more valuable but do make workers more miserable. And last month, Brian Elliott, an executive advisor who wrote a book about the benefits of flexible teams, noted that only one in three executives thinks RTO had "even a slight positive impact on productivity." But not every company drew a hard line the way that Amazon did. For example, Dell gave workers a choice to remain remote and accept they can never be eligible for promotions, or mark themselves as hybrid. Workers who refused the RTO said they valued their free time and admitted to looking for other job opportunities.


Navigating the cloud and AI landscape with a practical approach

When it comes to AI or genAI, just like everyone else, we started with use cases that we can control. These include content generation, sentiment analysis and related areas. As we explored these use cases and gained understanding, we started to dabble in other areas. For example, we have an exciting use case for cleaning up our data that leverages genAI as well as non-generative machine learning to help us identify inaccurate product descriptions or incorrect classifications and then clean them up and regenerate accurate, standardized descriptions. ... While this might be driving internal productivity, you also must think of it this way: As a distributor, at any one time, we deal with millions of parts. Our supplier partners keep sending us their price books, spec sheets and product information every quarter. So, having a group of people trying to go through all that data to find inaccuracies is a daunting, almost impossible, task. But with AI and genAI capabilities, we can clean up any inaccuracies far more quickly than humans could. Sometimes within as little as 24 hours. That helps us improve our ability to convert and drive business through an improved experience for our customers.


When the System Fights Back: A Journey into Chaos Engineering

Enter chaos engineering — the art of deliberately creating disaster to build stronger systems. I’d read about Netflix’s Chaos Monkey, a tool designed to randomly kill servers in production, and I couldn’t help but admire the audacity. What if we could turn our system into a fighter — one that could take a punch and still come out swinging? ... Chaos engineering taught me more than I expected. It’s not just a technical exercise; it’s a mindset. It’s about questioning assumptions, confronting fears, and embracing failure as a teacher. We integrated chaos experiments into our CI/CD pipeline, turning them into regular tests. Post-mortems became celebrations of what we’d learned, rather than finger-pointing sessions. And our systems? Stronger than ever. But chaos engineering isn’t just about the tech. It’s about the culture you build around it. It’s about teaching your team to think like detectives, to dig into logs and metrics with curiosity instead of dread. It’s about laughing at the absurdity of breaking things on purpose and marveling at how much you learn when you do. So here’s my challenge to you: embrace the chaos. Whether you’re running a small app or a massive platform, the principles hold true. 
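
In that spirit, here is a tiny, self-contained chaos experiment in Python: a decorator randomly injects latency or failure into a dependency call, and the check at the end asserts that the caller's fallback still returns a valid response. It is the idea behind tools like Chaos Monkey in miniature, not a substitute for them.

```python
# A tiny chaos experiment: randomly inject latency or failure into a
# dependency call and verify the caller degrades gracefully.
import random
import time

def flaky(failure_rate=0.3, max_delay=0.1):
    """Wrap a function so it sometimes fails or responds slowly."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            time.sleep(random.uniform(0, max_delay))  # injected latency
            if random.random() < failure_rate:
                raise ConnectionError("chaos: dependency down")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@flaky()
def fetch_recommendations(user_id):
    return ["item-1", "item-2"]

def handler(user_id):
    try:
        return fetch_recommendations(user_id)
    except ConnectionError:
        return []  # graceful degradation: empty but valid response

# Run the experiment many times; the handler should never blow up.
assert all(isinstance(handler(42), list) for _ in range(50))
print("system degraded gracefully under chaos")
```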


Enhancing Your Company’s DevEx With CI/CD Strategies

CI/CD pipelines are key to an engineering organization’s efficiency, used by up to 75% of software companies with developers interacting with them daily. However, these CI/CD pipelines are often far from being the ideal tool to work with. A recent survey found that only 14% of practitioners go from code to production in less than a day when high-performing teams should be able to deploy multiple times a day. ... Merging, building, deploying and running are all classic steps of a CI/CD pipeline, often handled by multiple tools. Some organizations have SREs that handle these functions, but not all developers are that lucky! In that case, if a developer wants to push code where a pipeline isn’t set up — which is increasingly common with the rise of microservices — they must assemble those rarely used tools. However, this will disturb the flow state you wish your developers to remain in. ... Troubleshooting issues within a CI/CD pipeline can be challenging for developers due to limited visibility and information. These processes often operate as black boxes, running on servers that developers may not have direct access to, with software that is foreign to developers. Consequently, developers frequently rely on DevOps engineers — often understaffed — to diagnose problems, leading to slow feedback loops.
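
One way to reduce that black-box feeling is to make every stage's command, duration, and output visible by default. The sketch below is a minimal, hypothetical pipeline runner in Python; the stage commands are placeholders for whatever a project actually runs.

```python
# Minimal sketch of a pipeline runner that surfaces each stage's command,
# duration and output, so a failure isn't a black box for developers.
import subprocess
import time

STAGES = [
    ("build", ["python", "-m", "compileall", "-q", "."]),
    ("test", ["python", "-m", "pytest", "-q"]),  # placeholder commands
]

def run_pipeline():
    for name, cmd in STAGES:
        start = time.monotonic()
        result = subprocess.run(cmd, capture_output=True, text=True)
        elapsed = time.monotonic() - start
        print(f"[{name}] {' '.join(cmd)} -> exit {result.returncode} in {elapsed:.1f}s")
        if result.returncode != 0:
            print(result.stdout + result.stderr)  # show the evidence, not just "failed"
            raise SystemExit(f"stage '{name}' failed")

if __name__ == "__main__":
    run_pipeline()
```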


How to Architect Software for a Greener Future

Code efficiency is something that the platforms and the languages should make easy for us. They should do the work, because that's their area of expertise, and we should just write code. Yes, of course, write efficient code, but it's not a silver bullet. What about data center efficiency, then? Surely, if we just made our data center hyper efficient, we wouldn't have to worry. We could just leave this problem to someone else. ... It requires you to do some thinking. It also requires you to orchestrate this in some type of way. One way to do this is autoscaling. Let's talk about autoscaling. We have the same chart here but we have added demand. Autoscaling is the simple concept that when you have more demand, you use more resources and you have a bigger box, a virtual machine, for example. The key here is that it's very easy to do the first thing. We like to do this, "I think demand is going to go up, provision more, have more space. Yes, I feel safe. I feel secure now". Going the other way is a little scarier. It's actually just as important when it comes to sustainability. Otherwise, we end up in the first scenario where we are incorrectly sized for our resource use. Of course, this is a good tool to use if you have variability in demand.
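
A back-of-the-envelope Python sketch of that scale-up and scale-down decision is shown below; the thresholds, instance counts, and utilisation figures are illustrative only.

```python
# Back-of-the-envelope sketch of the autoscaling decision discussed above.
def desired_instances(current: int, utilisation: float,
                      scale_up_at: float = 0.75, scale_down_at: float = 0.30,
                      min_instances: int = 1, max_instances: int = 20) -> int:
    """Grow when busy, and (the scarier part) shrink again when demand falls."""
    if utilisation > scale_up_at:
        target = current + 1
    elif utilisation < scale_down_at:
        target = current - 1          # releasing capacity is what saves energy
    else:
        target = current
    return max(min_instances, min(max_instances, target))

# A day of fluctuating demand: utilisation per hour on the current fleet.
fleet = 4
for hour, util in enumerate([0.2, 0.25, 0.5, 0.8, 0.9, 0.85, 0.4, 0.2]):
    fleet = desired_instances(fleet, util)
    print(f"hour {hour}: utilisation {util:.0%} -> {fleet} instance(s)")
```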


Tech Trends 2025 shines a light on the automation paradox – R&D World

The surge in AI workloads has prompted enterprises to invest in powerful GPUs and next-generation chips, reinventing data centers as strategic resources. ... As organizations race to tap progressively more sophisticated AI systems, hardware decisions once again become integral to resilience, efficiency and growth, while leading to more capable “edge” deployments closer to humans and not just machines. As Tech Trends 2025 noted, “personal computers embedded with AI chips are poised to supercharge knowledge workers by providing access to offline AI models while future-proofing technology infrastructure, reducing cloud computing costs, and enhancing data privacy.” ... Data is the bedrock of effective AI, which is why “bad inputs lead to worse outputs—in other words, garbage in, garbage squared,” as Deloitte’s 2024 State of Generative AI in the Enterprise Q3 report observes. Fully 75% of surveyed organizations have stepped up data-life-cycle investments because of AI. Layer a well-designed data framework beneath AI, and you might see near-magic; rely on half-baked or biased data, and you risk chaos. As a case in point, Vancouver-based LIFT Impact Partners fine-tuned its AI assistants on focused, domain-specific data to help Canadian immigrants process paperwork—a far cry from scraping the open internet and hoping for the best.


What Happens to Relicensed Open Source Projects and Their Forks?

Several companies have relicensed their open source projects in the past few years, so the CHAOSS project decided to look at how an open source project’s organizational dynamics evolve after relicensing, both within the original project and its fork. Our research compares and contrasts data from three case studies of projects that were forked after relicensing: Elasticsearch with fork OpenSearch, Redis with fork Valkey, and Terraform with fork OpenTofu. These relicensed projects and their forks represent three scenarios that shed light on this topic in slightly different ways. ... OpenSearch was forked from Elasticsearch on April 12, 2021, under the Apache 2.0 license, by the Amazon Web Services (AWS) team so that it could continue to offer this service to its customers. OpenSearch was owned by Amazon until September 16, 2024, when it transferred the project to the Linux Foundation. ... OpenTofu was forked from Terraform on Aug. 25, 2023, by a group of users as a Linux Foundation project under the MPL 2.0. These users were starting from scratch with the codebase since no contributors to the OpenTofu repository had previously contributed to Terraform.


Setting up a Security Operations Center (SOC) for Small Businesses

In today's digital age, security is not optional for any business, irrespective of its size. Small businesses equally face increasing cyber threats, making it essential to have robust security measures in place. A SOC is a dedicated team responsible for monitoring, detecting, and responding to cybersecurity incidents in real time. It acts as the frontline defense against cyber threats, helping to safeguard your business's data, reputation, and operations. By establishing a SOC, you can proactively address security risks and enhance your overall cybersecurity posture. The cost of setting up a SOC for a small business may be prohibitive, in which case the business may look at engaging Managed Service Providers for all or part of the services. ... Establishing clear, well-defined processes is vital for the smooth functioning of your SOC. The NIST Cybersecurity Framework could be a good fit for all businesses, and one can define the processes that are essential and relevant considering the size, threat landscape and risk tolerance of the business. ... Continuous training and development are essential for keeping your SOC team prepared to handle evolving threats. Offer regular training sessions, certifications, and workshops to enhance their skills and knowledge.



Quote for the day:

"Hardships often prepare ordinary people for an extraordinary destiny." -- C.S. Lewis

Daily Tech Digest - December 19, 2024

How AI-Empowered ‘Citizen Developers’ Help Drive Digital Transformation

To compete in the future, companies know they need more IT capabilities, and the current supply chain has failed to provide the necessary resources. The only way for companies to fill the void is through greater emphasis on the skill development of their existing staff — their citizens. Imagine two different organizations. Both have explicit initiatives underway to digitally transform their businesses. In one, the IT organization tries to carry the load by itself. There, the mandate to digitize has only created more demand for new applications, automations, and data analyses — but no new supply. Department leaders and digitally oriented professionals initially submitted request after request, but as the backlog grew, they became discouraged and stopped bothering to ask when their solutions would be forthcoming. After a couple of years, no one even mentioned digital transformation anymore. In the other organization, digital transformation was a broad organizational mandate. IT was certainly a part of it and had to update a variety of enterprise transaction systems as well as moving most systems to the cloud. They had their hands full with this aspect of the transformation. Fortunately, in this hypothetical company, many citizens were engaged in the transformation process as well. 


Things CIOs and CTOs Need To Do Differently in 2025

“Because the nature of the threat that organizations face is increasing all the time, the tooling that’s capable of mitigating those threats becomes more and more expensive,” says Logan. “Add to that the constantly changing privacy security rules around the globe and it becomes a real challenge to navigate effectively.” Also realize that everyone in the organization is on the same team, so problems should be solved as a team. IT leadership is in a unique position to help break down the silos between different stakeholder groups. ... CIOs and CTOs face several risks as they attempt to manage technology, privacy, ROI, security, talent and technology integration. According to Joe Batista, chief creatologist, former Dell Technologies & Hewlett Packard Enterprise executive, senior IT leaders and their teams should focus on improving the conditions and skills needed to address such challenges in 2025 so they can continue to innovate. “Keep collaborating across the enterprise with other business leaders and peers. Take it a step further by exploring how ecosystems can impact your business agenda,” says Batista. “Foster an environment that encourages taking on greater risks. The key is creating a space where innovation can thrive, and failures are steppingstones to success.”


5 reasons why 2025 will be the year of OpenTelemetry

OTel was initially targeted at cloud-native applications, but with the creation of a special interest group within OpenTelemetry focused on the continuous integration and continuous delivery (CI/CD) application development pipeline, OTel is becoming a more powerful, end-to-end tool. “CI/CD observability is essential for ensuring that software is released to production efficiently and reliably,” according to project lead Dotan Horovits. “By integrating observability into CI/CD workflows, teams can monitor the health and performance of their pipelines in real-time, gaining insights into bottlenecks and areas that require improvement.” He adds that open standards are critical because they “create a common uniform language which is tool- and vendor-agnostic, enabling cohesive observability across different tools and allowing teams to maintain a clear and comprehensive view of their CI/CD pipeline performance.” ... The explosion of interest in AI, genAI, and large language models (LLMs) is driving a corresponding surge in the volume of data generated, processed, and transmitted across enterprise networks, which means a commensurate increase in the volume of telemetry data that must be collected to ensure AI systems are operating efficiently.
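
As a rough sketch of what pipeline instrumentation can look like, the example below uses the OpenTelemetry Python SDK to wrap two CI/CD stages in spans and print them to the console. The stage names and the pipeline attribute are assumptions for illustration; a real setup would export spans to an OTLP-compatible backend and follow OTel's emerging CI/CD semantic conventions.

```python
# Minimal sketch of CI/CD stage instrumentation with the OpenTelemetry Python SDK
# (pip install opentelemetry-sdk). Stage names and attributes are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Spans are printed to the console here; a real pipeline would use an OTLP exporter.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("ci.pipeline")

def run_stage(name: str) -> None:
    """Wrap one pipeline stage in a span so its duration and status are recorded."""
    with tracer.start_as_current_span(name) as span:
        span.set_attribute("pipeline.name", "example-deploy")  # hypothetical attribute
        # ... invoke the actual build/test/deploy step here ...

run_stage("build")
run_stage("integration-tests")
```

Each span captures the stage's start time, duration, and status, which is exactly the raw material a telemetry backend needs to surface pipeline bottlenecks.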


The Importance of Empowering CFOs Against Cyber Threats

Today's CFOs must be collaborative leaders, willing to embrace an expanding role that includes protecting critical assets and securing the bottom line. To do this, CFOs must work closely with chief information security officers (CISOs), due to the sophistication and financial impact of cyberattacks. ... CFOs are uniquely positioned to understand the potential financial devastation from cyber incidents. The costs associated with a breach extend beyond immediate financial losses, encompassing longer-term repercussions, such as reputational damage, legal liabilities, and regulatory fines. CFOs must measure and consider these potential financial impacts when participating in incident response planning. ... The regulatory landscape for CFOs has evolved significantly beyond Sarbanes-Oxley. The Securities and Exchange Commission's (SEC's) rules on cybersecurity risk management, strategy, governance, and incident disclosure have become a primary concern for CFOs and reflect the growing recognition of cybersecurity as a critical financial and operational risk. ... Adding to the complexity, the CFO is now a cross-functional collaborator who must work closely with IT, legal, and other departments to prioritize cyber initiatives and investments. 


Community Banks Face Perfect Storm of Cybersecurity, Regulatory and Funding Pressures

Cybersecurity risks continue to cast a long shadow over technological advancement. About 42% of bankers expect cybersecurity risks to pose their most difficult challenge in implementing new technologies over the next five years. This concern is driving many institutions to take a cautious approach to emerging technologies like artificial intelligence. ... Banks express varying levels of satisfaction with their technology services. Asset liability management and interest rate risk technologies receive the highest satisfaction ratings, with 87% and 84% of respondents respectively reporting being “extremely” or “somewhat” satisfied. However, workflow processing and core service provider services show room for improvement, with less than 70% of banks expressing satisfaction with these areas. ... Compliance costs continue to consume a significant portion of bank resources. Legal and accounting/auditing expenses related to compliance saw notable increases, with both categories rising nearly 4 percentage points as a share of total expenses. The implementation of the current expected credit loss (CECL) accounting standard has contributed to these rising costs.


Dark Data Explained

Dark data often lies dormant and untapped, its value obscured by poor quality and disorganization. Yet within these neglected reservoirs of information lies the potential for significant insights and improved decision-making. To unlock this potential, data cleaning and optimization become vital. Cleaning dark data involves identifying and correcting inaccuracies, filling in missing entries, and eliminating redundancies. This initial step is crucial, as unclean data can lead to erroneous conclusions and misguided strategies. Optimization furthers the process by enhancing the usability and accessibility of the data. Techniques such as data transformation, normalization, and integration play pivotal roles in refining dark data. By transforming the data into standardized formats and ensuring it adheres to consistent structures, companies and researchers can more effectively analyze and interpret the information. Additionally, integration across different data sets and sources can uncover previously hidden patterns and relationships, offering a comprehensive view of the phenomenon being studied. By converting dark data through meticulous cleaning and sophisticated optimization, organizations can derive actionable insights and add substantial value. 
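
As a rough illustration of those cleaning and optimization steps, the pandas sketch below removes duplicates, corrects types, fills missing entries, normalizes formats, and integrates a second source. The file names, column names, and the median-fill strategy are hypothetical placeholders rather than recommendations.

```python
import pandas as pd

# Minimal sketch of cleaning and optimizing a neglected ("dark") data set.
# File and column names are hypothetical.
raw = pd.read_csv("legacy_export.csv")

# Cleaning: eliminate redundancies, correct inaccuracies, fill missing entries.
clean = raw.drop_duplicates()
clean["email"] = clean["email"].str.strip().str.lower()                # normalize format
clean["revenue"] = pd.to_numeric(clean["revenue"], errors="coerce")    # coerce bad values to NaN
clean["revenue"] = clean["revenue"].fillna(clean["revenue"].median())  # simple gap-filling choice

# Optimization: transform to standardized structures and integrate other sources.
clean["signup_date"] = pd.to_datetime(clean["signup_date"], errors="coerce")
crm = pd.read_csv("crm_contacts.csv")
combined = clean.merge(crm, on="email", how="left")  # integration can surface hidden relationships

print(combined.describe(include="all"))
```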


In potential reversal, European authorities say AI can indeed use personal data — without consent — for training

The European Data Protection Board (EDPB) issued a wide-ranging report on Wednesday exploring the many complexities and intricacies of modern AI model development. It said that it was open to potentially allowing personal data, without the owner’s consent, to be used to train models, as long as the finished application does not reveal any of that private information. This reflects the reality that training data does not necessarily translate into the information eventually delivered to end users. ... “Nowhere does the EDPB seem to look at whether something is actually personal data for the AI model provider. It always presumes that it is, and only looks at whether anonymization has taken place and is sufficient,” Craddock wrote. “If insufficient, the SA [supervisory authority] would be in a position to consider that the controller has failed to meet its accountability obligations under Article 5(2) GDPR.” And in a comment on LinkedIn that mostly supported the standards group’s efforts, Patrick Rankine, the CIO of UK AI vendor Aiphoria, said that IT leaders should stop complaining and up their AI game. “For AI developers, this means that claims of anonymity should be substantiated with evidence, including the implementation of technical and organizational measures to prevent re-identification,” he wrote, noting that he agrees 100% with this sentiment.


Software Architecture and the Art of Experimentation

While we can’t avoid being wrong some of the time, we can reduce the cost of being wrong by running small experiments to test our assumptions and reverse wrong decisions before their costs compound. But here time is the enemy: there is never enough time to test every assumption, so knowing which assumptions to confront is the art in architecting. Successful architecting means experimenting to test decisions that affect the architecture of the system, i.e. those decisions that are "fatal" to the success of the thing you are building if you are wrong. ... If you don’t run an experiment, you are assuming you already know the answer to some question. So long as that’s the case, or so long as the risk and cost of being wrong is small, you may not need to experiment. Some big questions, however, can only be answered by experimenting. Since you probably can’t run experiments for all the questions you have to answer, you implicitly accept the associated risk for the questions you skip, and you need to make a trade-off between the number of experiments you can run and the risks you won’t be able to mitigate by experimenting. The challenge in creating experiments that test both the MVP and MVA is asking questions that challenge the business and technical assumptions of both stakeholders and developers.


5 job negotiation tips for CAIOs

As you discuss base, bonus, and equity, be specific and find out exactly what their pay range actually is for this emerging role and how that compares with market rates for your location. For example, some recruiters may give you a higher number early on in discussions, and then once you’re well bought-in to the company after several interviews, the final offer may throttle things back. ... Set clear expectations early, and be prepared to withdraw your candidacy if any downward-revised amount later on is too far below your household needs. ... As a CAIO, you don’t want to be measured the same as the lines of business, or penalized if they fall short of quarterly or yearly sales targets. Ensure your performance metrics are appropriate for the role and the balance you’ll need to strike between near-term and longer-term objectives. For certain, AI should enable near-term productivity improvements and cost savings, but it should also enable longer-term revenue growth via new products and services, or enhancements to existing offerings. ... Companies sometimes place a clause in their legal agreement that states they own all pre-existing IP. Get that clause removed and itemize your pre-existing IP if needed to ensure it stays under your ownership. 


Leadership skills for managing cybersecurity during digital transformation

First, security must be top of mind as all new technologies are planned. As you innovate, ensure that security is built into deployments and that the options chosen match your business risk profile and organization’s values. For example, consider enabling the maximum security features that come with many IoT devices, such as forcing the change of default passwords, patching devices, and ensuring vulnerabilities can be addressed. Likewise, ensure that AI applications are ethically sound, transparent, and do not introduce unintended biases. Second, a comprehensive risk assessment should be performed on the current network and systems environment as well as on the future planned “To Be” architecture. ... Digital transformation also demands leaders who are not only technically adept but also visionary in guiding their organizations through change. Leaders must be able to inspire a digital culture, align teams with new technologies, and drive strategic initiatives that leverage digital capabilities for competitive advantage. Finally, leaders must be lifelong learners who constantly update their skills and forge strong relationships across their organization in this new digitally transformed environment.



Quote for the day:

"Don’t watch the clock; do what it does. Keep going." -- Sam Levenson