
Daily Tech Digest - November 10, 2025


Quote for the day:

"You can only lead others where you yourself are willing to go." -- Lachlan McLean



CISOs must prove the business value of cyber — the right metrics can help

With a foundational ERM program, and by aligning metrics to business priorities, cybersecurity leaders can ultimately prove the value of the cybersecurity function. Useful examples of metrics expressed in business terms include maturity, compliance, risk, budget, business value streams, and the status of SecDevOps (shifting left) adoption, Oberlaender explains. But how does a cybersecurity expert learn what’s important to the business? ... “Boards are faced with complex matters such as the impact of interest rates, tariffs, stock price volatility, supply chain issues, profitability, and acquisitions. Then the CISO enters the boardroom with their MITRE ATT&CK framework, patching metrics, and NIST maturity models,” Hetner continues. “These metrics are not aligned to what the board is conditioned to reviewing.” ... Rather than just asking “are we secure?” business leaders are asking what metrics their cyber teams are using to measure and quantify risk and how they’re spending against those risks. For CISOs, this goes beyond measuring against frameworks such as NIST, listing a litany of security vulnerabilities they patched, or their mean time to respond. “Instead, we can say, ‘This is our potential financial exposure’,” Nolen explains. “So now you’re talking dollars and cents rather than CVEs and technical scores that board members don’t care about. What they care about is the bottom line.” 
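
To make the “dollars and cents” framing concrete, security teams often reach for classic quantitative risk formulas such as annualized loss expectancy (ALE = single loss expectancy × annualized rate of occurrence). The sketch below is a minimal illustration of that arithmetic; the scenarios, asset values, and rates are hypothetical placeholders, not benchmarks.

```python
# Minimal sketch of expressing cyber risk as financial exposure, using the
# classic annualized loss expectancy model: ALE = SLE * ARO, where SLE is
# single loss expectancy (asset value * exposure factor) and ARO is the
# annualized rate of occurrence. All figures are illustrative placeholders.

def annualized_loss_expectancy(asset_value: float,
                               exposure_factor: float,
                               annual_rate_of_occurrence: float) -> float:
    """Expected annual loss in dollars for one risk scenario."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical scenarios: (asset value, fraction lost per incident, incidents/year)
scenarios = {
    "ransomware on order processing": (5_000_000, 0.40, 0.5),
    "customer data breach": (12_000_000, 0.25, 0.2),
}

for name, (value, ef, aro) in scenarios.items():
    print(f"{name}: ${annualized_loss_expectancy(value, ef, aro):,.0f} expected loss/year")
```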


Feeding the AI beast, with some beauty

AI-driven growth is placing an unprecedented load on data centres worldwide, and India is poised to shoulder a large share of the incremental electricity, real estate, and cooling burden created by rising AI demand. The IEA estimates that AI-driven electricity demand is accelerating rapidly. Under realistic scenarios, AI workloads alone could require on the order of 1–1.5 GW of continuous IT power—equivalent to 8.8–13 TWh annually (1 GW of continuous load over a year is 1 GW × 8,760 hours ≈ 8.8 TWh)—in India by 2030. This translates into a significant new draw on grids, water resources, and capex for cooling and power infrastructure. Recent analyses indicate that while AI’s share of data centre power today stands in the single-digit to low-teens range, it could climb to 20–40 per cent or more by 2030 in some scenarios, fundamentally reshaping the power-consumption profile of digital infrastructure. ... As data centres grow in scale, sustainability is becoming a competitive differentiator—and that’s where Life Cycle Assessments (LCAs) and Environmental Product Declarations (EPDs) play a critical role. An LCA is a systematic method for evaluating the total environmental impact of a product, process, or system across its entire life cycle. For a data centre, this spans both upstream (embodied) impacts—such as construction materials, IT equipment manufacturing, and cooling and power infrastructure including gensets—and operational impacts such as electricity consumption. 


8 IT leadership tips for first-time CIOs

Generally speaking, the first three years can make or break your IT leadership career, given that digital leaders globally tend to stay at one company for just over that length of time on average, according to the 2025 Nash Squared Digital Leadership Report. CIOs looking to sidestep that statistic are taking intentional measures: ensuring they get early wins and, perhaps most importantly, not coming into their role with preconceived ideas about how to lead or assuming what worked in a past job can be replicated. ... The CTO of staffing and recruiting firm Kelly says that “building momentum, finding ways to get quick wins from the low-hanging fruit” will help build credibility with the leadership team. Then, you can parlay those into bigger wins and avoid spinning out, he says. ... While making connections and establishing relationships is critical, Lewis stresses the importance of not rushing to change things right away when you’re new to the job. “Let it set for a while,” he says. ... This is especially true of midsize and larger organizations “where the clarity of strategy and clarity of what’s important … isn’t always well documented and well thought out,” Rosenbaum says. Knowing the maturity of your organization is really important, he says. “Some CIO roles are just about keeping the lights on, making sure security is good at a lower level. As the company starts to mature, they start thinking about technology as an enabler, and to that end, they start having maybe a more unified technology strategy.”


Drata’s VP of Data on Rethinking Data Ops for the AI Era: Crawl, Walk, Run — Then Sprint

While GenAI may be the shiny new tool, Solomon makes it clear that foundational work around ingestion and transformation is far from trivial. “We live and die by making sure that all the data has been ingested in a fresh manner into the data warehouse,” he explains. He describes the “bread and butter” of the team: synchronizing thousands of MySQL databases from a single-tenant production architecture into the warehouse, as close to real time as possible. “We do a lot of activities with regard to the CDC pipeline, which is just like driving terabytes of data per day.” But the data team isn’t working in isolation. GTM executives return from conferences excited about GenAI. ... Rather than building fully fledged pipelines from day one, the team prioritizes quick feedback loops — using sandboxes, cloud notebooks, or Streamlit apps to test hypotheses. Once business impact is validated, the team gradually introduces cost tracking, governance, and scalability. If a stakeholder’s hypothesis lacks merit, there is no point in building complex data pipelines, governance frameworks, or cost-tracking systems. This shift in mindset, he explains, is something many data teams are grappling with today. Traditionally, data teams were trained to build scalable, robust pipelines from day one — often requiring significant upfront effort. But this often led to cost inefficiencies and delays.
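
A minimal sketch of that quick-feedback approach: a throwaway Streamlit app that tests a stakeholder hypothesis on a one-off data extract before any pipeline, governance, or cost-tracking work is committed. The CSV schema ('got_email', 'days_to_convert') and the hypothesis itself are hypothetical placeholders.

```python
# A throwaway Streamlit app for validating a stakeholder hypothesis on a
# one-off data extract, before building any governed pipeline. The CSV schema
# ('got_email', 'days_to_convert') and the hypothesis are hypothetical.
import pandas as pd
import streamlit as st

st.title("Sandbox: do onboarding emails speed up trial conversion?")

uploaded = st.file_uploader("Upload a one-off extract (CSV)", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
    # Compare median days-to-convert between the two cohorts.
    summary = df.groupby("got_email")["days_to_convert"].median()
    st.write("Median days to convert, by cohort:", summary)
    st.bar_chart(summary)
    # Only if the difference holds up does pipeline/governance work begin.
```

Running this with Streamlit's CLI gives stakeholders a same-day answer; only validated hypotheses graduate into governed, cost-tracked pipelines.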


Model Context Protocol Servers: Build or Buy?

"The tension lies in whether you have the sustained capacity to keep pace with protocols that are still being debated by their maintainers," said Rishi Bhargava, co-founder at Descope, a customer and agentic IAM platform. "Are you prepared to build the plane while it's flying, or would you rather upgrade a finished plane mid-flight?" ... "From a business perspective, the build versus buy decision for MCP servers boils down to strategic priorities and risk appetite," Jain said. Building MCP servers in-house gives you "complete control," but buying provides "speed, reliability, and lower operational burden," he said. But others think there's no reason to rush your decision. ... "Most companies shouldn't be doing either yet," he said, explaining that companies should first focus on the specific business goals they are trying to achieve, rather than on which existing applications they think should have AI features added. "Build when you have an actual AI application that requires custom data integration and you understand exactly what intelligence you're trying to deploy. If you're simply connecting ChatGPT to your CRM, you don't need MCP at all," Prywata said. ... "It is usually best to build [MCP servers] in-house when compliance, performance tuning, or data sovereignty are key priorities for the business," said Marcus McGehee, founder at The AI Consulting Lab. 


Every CIO Fails; The Smart Ones Admit It

There's a "hero CIO" myth deeply rooted in our mindset - the idea that you're the person who makes technology work, no matter what. Admitting failure feels like admitting incompetence, especially in boardrooms where few understand the complexity of IT. Organizational incentives also discourage openness. Many companies punish failure more than they reward learning. I've seen talented CIOs denied promotion because of a single delayed project, even when their broader portfolio delivered value. When institutional memory focuses on what went wrong rather than what was learned, people stop taking risks. The second factor is C-suite politics. In some environments, transparency becomes ammunition. Another team might use a project delay to justify requests for budget increases or to exert influence. And finally, CIOs worry about vendor perception, admitting setbacks could impact pricing, support or their reputation with partners. ... Build your transparency muscle in peacetime, not when something is on fire. By the time a crisis hits, it's too late to establish credibility. Make transparency habitual. Share work in progress, not just results. Celebrate learning, not perfection. Run "pre-mortems" where you assume a project failed and work backwards to identify what could go wrong. And when you make a mistake, own it publicly. The honesty earns you more trust than a polished explanation ever will.


6 proven lessons from the AI projects that broke before they scaled

In analyzing dozens of AI PoCs that sailed on through to full production use — or didn’t — six common pitfalls emerge. Interestingly, it’s usually not the quality of the technology but misaligned goals, poor planning, or unrealistic expectations that cause failure. ... Define specific, measurable objectives upfront. Use SMART criteria. For example, aim for “reduce equipment downtime by 15% within six months” rather than a vague “make things better.” Document these goals and align stakeholders early to avoid scope creep. ... Invest in data quality over volume. Use tools like Pandas for preprocessing and Great Expectations for data validation to catch issues early. Conduct exploratory data analysis (EDA) with visualizations (like Seaborn) to spot outliers or inconsistencies. Clean data is worth more than terabytes of garbage. ... Start simple. Use straightforward algorithms like random forest from scikit-learn or XGBoost to establish a baseline. Only scale to complex models — TensorFlow-based long short-term memory (LSTM) networks, say — if the problem demands it. Prioritize explainability with tools like SHAP to build trust with stakeholders. ... Plan for production from day one. Package models in Docker containers and deploy with Kubernetes for scalability. Use TensorFlow Serving or FastAPI for efficient inference. Monitor performance with Prometheus and Grafana to catch bottlenecks early. Test under realistic conditions to ensure reliability.
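
As a minimal sketch of the “start simple” advice, the snippet below fits a scikit-learn random forest baseline on synthetic data and uses SHAP to rank the features driving its predictions; the dataset and hyperparameters are placeholders, not a recommendation.

```python
# "Start simple": a random forest baseline with SHAP explanations.
# Synthetic data stands in for a real feature set.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Establish the baseline before reaching for deep learning.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("baseline accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Rank features by mean |SHAP| value: a simple global importance measure
# that helps stakeholders see what drives predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# Older shap versions return a list (one array per class); newer ones an array.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
importance = np.abs(vals).mean(axis=0)
for i in np.argsort(importance)[::-1][:3]:
    print(f"feature_{i}: mean |SHAP| = {importance[i]:.3f}")
```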


Andela CEO talks about the need for ‘borderless talent’ amid work visa limitation

Globally, three of four IT employers say they lack the tech talent they need, and the outlook will only get more dire as AI creates a demand for high-skilled specialists like data engineers, senior architects, and agentic orchestrators. Visa programs aren’t designed by the laws of supply and demand. They’re defined by policy makers and are updated infrequently. So, they’ll never truly be in sync with the needs of the labor market. ... Brilliant people exist around the world. It’s why they want to sponsor people for H-1B visas. But hiring outside of those traditional pathways — to work with a brilliant machine learning engineer from Cairo or São Paulo, for example — is…a long, painful process that takes months and is inaccessible to them. They don’t know that they can find the right partner, someone who has sorted this all out and vetted talent and developed compliance with global labor and tax laws, etc. Once they understand that those partners exist, the global workforce becomes instantly accessible to them. ... Technical hiring still feels like a gamble, even though software development is, relatively speaking, packed with deterministic skills. There are two main problems. One problem is the data problem. There’s not enough reliable data about what a job actually requires and what a worker is capable of doing. Today, we rely on resumes and job descriptions. 


The Overwhelm Epidemic: Why Resilience Begins with You

People have so much to do and not enough time. There’s nothing new about the phenomenon of not having enough time to do what needs to be done, but today it’s different: this feeling of overwhelm has been expanding continuously since early 2020, when the pandemic began. We’re being overwhelmed to an extent most people are not equipped to deal with.
For you in operational resilience, I believe self-care is more critical now than it has ever been. You are only able to help your clients and their systems be resilient to the extent you are taking care of yourself and are resilient. ... Most say something like, “I’m going to double down and focus on this. I’m going to work harder and spend as much time as needed, even if it means cutting into my already precious personal time.” They think working harder is the best approach, but here’s the thing—they are wrong.
When you are operating at high stress levels, introducing more stress by doubling down and working harder actually reduces your output. ... Bottom line: a thriving, elite mindset is the foundation of personal wellbeing and professional success. 
Turning to positive psychology: underlying Martin Seligman’s model of human flourishing are 24 positive character strengths. While more research is still needed, the research to date has concluded that of the 24, the best predictor of living a flourishing, thriving life is gratitude.


Ask a Data Ethicist: What Are the Impacts of AI on Creativity, Schools, and Industry?

Generally speaking, if the goal is to reduce the cost of labour by replacing it with equipment (capital – or AI), then assuming the AI tool replaces the labour in a way that is acceptable to drive the desired outputs, the business could possibly drive more profit. So that might be construed as positive for the business. However, businesses exist in the bigger context of society. To take an extreme example, if a large section of the population loses their jobs, they can’t buy your products, and that could hurt your organization. It also puts more burdens on society for a social safety net, perhaps resulting in tax increases or some other impacts to business to pay for those services. ... I think it’s important to disclose the use of AI in a process. For video, audio or images – a symbol or some text to say “AI generated” can accomplish that goal. Watermarking the content is another, more technical method. For text, it’s trickier. I don’t think everyone needs to be told about every instance of a spellchecker (to use an extreme example), but if the whole thing is generated, then it is important to say that. This is where a policy can be helpful. For example, one might apply the 80/20 rule – if less than 20% is generated, perhaps it’s not necessary to disclose it. That said, there had better not be any inaccuracies or errors in the content if you choose NOT to disclose it. See this case in Australia. This is an example of why I think disclosing, overall, is a good idea.

Daily Tech Digest - May 16, 2025


Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye


AI Agents: Protocols Driving Next-Gen Enterprise Intelligence

MCP substantially simplifies agentic AI adoption for developers. This roadmap created by the MCP community clearly defines priorities and direction, providing helpful guidance for implementation. Organizations will also benefit from the key initiatives outlined in the roadmap, like the MCP Registry, which enables developers to build a comprehensive network of agents. The emergence of OAuth as a complementary standard protocol strengthens agent ecosystems even more. As with any other framework, MCP has its challenges. MCP offers a wide array of tools to support LLM reasoning, but it doesn’t prioritize coordinated, high-quality task execution. ... ACP will make it easier to implement AI agents on edge and local devices. In instances where the majority of decision-making happens “on the go” in a disconnected environment, this protocol will be useful. Now, developers can build modular systems that can coordinate with a standard protocol to make edge AI easier. A2A will gain momentum and enable cross-platform agents to work together to deliver superior intelligence to customers. A2A will help coordinate agents built using diverse frameworks with a common standard. The main requirement for this is to build an Agent Card that allows agents to be used and consumed by others.
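
To make the Agent Card idea concrete, here is an illustrative sketch of one as a Python dictionary serialized to JSON. The field names loosely follow the publicly documented A2A draft, but treat the structure, endpoint, and skill definitions as hypothetical examples rather than a normative schema.

```python
# Illustrative Agent Card: the JSON document an A2A agent publishes so other
# agents can discover its endpoint and skills. Field names loosely follow the
# public A2A draft; treat the structure and values as hypothetical examples.
import json

agent_card = {
    "name": "invoice-reconciler",
    "description": "Matches incoming invoices against purchase orders.",
    "url": "https://agents.example.com/invoice-reconciler",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "defaultInputModes": ["text/plain", "application/json"],
    "defaultOutputModes": ["application/json"],
    "skills": [
        {
            "id": "reconcile",
            "name": "Reconcile invoice",
            "description": "Match an invoice to a PO and flag discrepancies.",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```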


Critical Infrastructure Under Siege: OT Security Still Lags

Industrial organizations and other kinds of critical infrastructure are regularly near or at the top of vendor lists highlighting ransomware targets. It's easy to see why; the important assets a threat actor could compromise put immense pressure on affected organizations to pay up. Kurt Gaudette, vice president of intelligence and services at Dragos, tells Dark Reading that the OT side of the house is "where the bottom line is." And indeed, Sophos reported last year that 65% of respondent organizations in the manufacturing sector reported that they suffered a ransomware attack in the year preceding the report; of those, 62% of organizations paid the ransom. Compounding this, the security postures of organizations that use OT/ICS can vary dramatically compared with traditional IT settings. The importance of staying patched is complicated by the reality that some industrial processes are meant to run uninterrupted for long periods of time and can't be subjected to the downtime necessary to patch. Second, an organization like a local water treatment plant might not have a significant security budget to invest in tools and personnel. Also, ICS products tend to be expensive, and aging equipment is everywhere, with many fields like healthcare drowning in legacy, hard-to-patch products or those without built-in security features.


Your Security Training Isn't Wrong. The Content Is Just Outdated

Although AI makes threats harder to detect, many breaches aren't caused by sophisticated hacking. They happen because organizations might not realize employees let their kids play Minecraft on their corporate laptops, or an old server or forgotten IoT device is still online. If IT doesn't know an asset exists, or who uses it, the team can't secure it, and hackers look for forgotten, unmonitored devices to break in. ... Managing and securing multiple systems can tempt employees to repeat passwords for simplicity. If employees continue to avoid using tools like corporate password managers to enforce strong, unique passwords, IT teams need to ask themselves why. How can they make warnings about this more impactful without burdening staff? ... The trouble is that, even with corporate password managers and MFA in place, hackers are still finding ways to steal credentials. These tools are designed to prevent hackers from entering your home, but if the door is left open, they won't stop anyone from walking in. The average annual growth rate of exposed accounts is 28%. Session expiration policies based on risk level and adaptive access policies can trigger forced signouts if a session shows abnormal behavior (e.g., logging in from a new IP while still active on another), which will help reduce account session takeovers.
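
A minimal sketch of the adaptive session policy described above: force a sign-out when a request arrives from an unseen IP while the session is still active elsewhere. The session model, signals, and response actions are hypothetical simplifications of what a real IAM platform would provide.

```python
# Adaptive session policy sketch: force a sign-out when a request comes from
# an unseen IP while the session is still active elsewhere. The session model
# and response actions are hypothetical simplifications.
from dataclasses import dataclass, field

@dataclass
class Session:
    user: str
    known_ips: set = field(default_factory=set)

def evaluate_request(session: Session, request_ip: str,
                     concurrent_elsewhere: bool) -> str:
    """Decide what to do with a request, given simple behavioral signals."""
    new_ip = request_ip not in session.known_ips
    if new_ip and concurrent_elsewhere:
        # Unseen IP while still active on another: treat as possible takeover.
        return "force_signout_and_require_mfa"
    if new_ip:
        # New location but no concurrent activity: step up rather than block.
        return "step_up_authentication"
    return "allow"

session = Session(user="alice", known_ips={"203.0.113.7"})
print(evaluate_request(session, "198.51.100.23", concurrent_elsewhere=True))
# -> force_signout_and_require_mfa
```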


Check Point CISO: Network segregation can prevent blackouts, disruptions

In 2025, industry watchers expect there will be an increase in the public budget allocated to defense. In Spain, one-third of that budget will be allocated to increasing cybersecurity. But for Fischbein, training teams is much more important than the budget. “The challenge is to distribute the budget in a way that can be managed,” he notes, and to leverage intuitive and easy-to-use platforms, so that organizations don’t have to invest all the money in training. “When you have information, management, users, devices, mobiles, data centers, clouds, cameras, printers… the security challenge is very complex,” he says. ... “In a security operations center (SOC), a person using Check Point tools could previously take between two and four hours to investigate the causes of an alert. Today that time has dropped to 20 minutes,” he says. He also explains how they work with vulnerabilities. “Currently, Check Point checks all of them in a few seconds and tells you whether you are protected or not. And if you are not, it tells you which network to protect.” Regarding attackers, he acknowledges that they now mount “richer and more logical” attacks. “With AI, they check the data and social networks of any person to impersonate a friend of the attacked person, because when someone receives something more personal they lower their defenses against phishing,” he says.


The Future (and Past) of Child Online Safety Legislation: Who Minds the Implementation Gap?

Acknowledging the limitations of exclusively using ID as a form of verification, many state bills, including those in Montana, Louisiana, Arkansas, Utah, and New York, have left the door open for “commercially reasonable” age verification methods. However, they give very little clarification as to what should be considered “commercially reasonable”. For example, Utah only specifies that these options can “[rely] on public or private transactional data to verify the age of the person attempting to access the material.” ... Throughout all of these bills, there is no insight as to what type of data is permissible, how this data should be sourced, or any consent mechanisms for leveraging the data. By leaving a loophole open for undefined measures of age verification, there is a risk of potentially invasive and privacy-violating data, such as biometric data, being required of everyone who intends to access social media platforms. Not only could this potentially compromise people’s ability to remain anonymous on the internet, but it could also lead to the consolidation of uniquely identifiable sensitive data within the entities performing these verifications. To combat this, all bills with specifications for commercially reasonable age verification methods prohibit the data being used for verification from being stored or retained after verification is complete.


Beyond Code Coverage: A Risk-Driven Revolution in Software Testing With Machine Learning

Risk-based testing (RBT) weighs checks by importance instead of treating every factor equally. It evaluates potential flaws based on failure impact, likelihood of failure, and business criticality. This approach ensures efficient resource management and improves software reliability by:
Focusing on critical areas: Instead of testing everything equally, RBT ensures that high-risk components receive the most attention.
Evaluating failure impact: Identifies and tests areas where defects could cause significant damage.
Assessing likelihood of failure: Targets unstable parts of the software by analyzing complexity, frequent changes, and past defects.
Prioritizing business-critical functions: Ensures essential systems like payment processing remain stable and reliable.
Optimizing resources and time: Reduces unnecessary testing effort, allowing teams to focus on what matters most.
Improving software dependability: Detects major issues early, leading to more stable and reliable software.
... Machine learning improves software testing by examining prior data (code changes, bug reports, and test results) to identify high-risk areas. It gives key tests top priority, finds anomalies before failures start, and keeps improving as fresh data arrives. By automating risk assessment, ML speeds up testing, improves accuracy, maximizes resources, and makes software testing smarter and more effective. A sketch of this idea appears below.
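
A minimal sketch of that ML-assisted prioritization, assuming simple historical signals (commit churn, complexity, past defects) and a label recording whether a module failed in a past release; the data, column names, and model choice are hypothetical.

```python
# ML-assisted risk-based testing sketch: score modules by failure risk from
# historical signals, then run tests for the riskiest modules first.
# All data, column names, and figures are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Past releases: per-module churn, complexity, defect history, and whether
# the module actually failed in the following release (the label).
history = pd.DataFrame({
    "recent_commits": [12, 2, 7, 1, 9, 3, 15, 4],
    "cyclomatic":     [45, 8, 30, 5, 22, 10, 60, 12],
    "past_defects":   [6, 0, 3, 0, 2, 1, 8, 1],
    "failed_next_release": [1, 0, 1, 0, 0, 0, 1, 0],
})

model = GradientBoostingClassifier(random_state=0)
model.fit(history.drop(columns="failed_next_release"),
          history["failed_next_release"])

# Score the current codebase and order the test queue by predicted risk.
modules = pd.DataFrame(
    {"recent_commits": [11, 1], "cyclomatic": [40, 6], "past_defects": [5, 0]},
    index=["payments", "settings_ui"],
)
modules["risk"] = model.predict_proba(modules)[:, 1]
print(modules.sort_values("risk", ascending=False))  # test 'payments' first
```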


Integrating Cybersecurity Into Change Management for Critical Infrastructure

The cyber MOC specifically targets changes affecting connected and configurable technologies, such as PLCs, IIoT devices, and network switches. The specific implementation of this process will vary depending on the organization’s structure and operational needs, as will the composition of the teams responsible for its execution. The reality is that many existing MOC frameworks were conceived before cybersecurity became a critical concern. Consequently, they often prioritize physical safety, leaving a significant gap in addressing potential cyber vulnerabilities. Traditional MOC tools, designed to support these processes, lack the necessary mechanisms to evaluate changes that could compromise cybersecurity. This oversight is a significant risk, particularly as infrastructure organizations become increasingly reliant on interconnected technologies. To bridge this gap, a fundamental shift is required. MOC tools and workflows must be revamped to incorporate cybersecurity considerations. While preserving core data fields and attributes, new fields must be introduced to capture cyber-related information. Similarly, RACI (responsible, accountable, consulted, and informed) matrices, which define responsibilities, must be expanded to include cyber risk accountability.


Deepfake attacks could cost you more than money

Treat deepfakes like any other cyber threat and apply a zero-trust mindset. That means don’t assume anything is real just because it looks or sounds convincing. Update your response plan to include steps for verifying video or audio content, especially if it’s being used to request sensitive actions. Build a risk model that considers how deepfakes could be used to target critical business processes, such as executive communications, financial approvals, or customer interactions. Make sure your team knows how to spot red flags, who to alert, and how to document the incident. Use detection tools that can scan media in real time and save flagged content for review. The faster you can identify and act, the more damage you can prevent. In today’s environment, it’s safer to question first and trust only after you verify. ... Deepfake awareness should be built into regular training so employees can spot warning signs early. Detection tools can support teams by scanning and flagging suspicious media in real time, helping them make faster, safer decisions. Incident response plans must also cover how to escalate, preserve evidence, and communicate if a deepfake is suspected. At the end of the day, questioning unusual communications must become the norm, not the exception.


Secure Code Development News to Celebrate

Another big payoff comes from paying down security debt. Wysopal said organizations with the most mature secure development practices fix 10% of their vulnerabilities on an annual basis and avoid having any security debt that is more than a year old. By contrast, "the lagging companies fix less than 1% of open bugs per month," he said. This strategy isn't always feasible. Notably, "we found that 70% of critical debt was in third-party code," and teams that build software with third-party - or sometimes fourth- or fifth-party - dependencies must often wait months for fixes to become available, Wysopal said. "Some software packages that are widely used by other software packages are harder to fix, so you have a lot of what we call transitive dependencies." There's no easy solution for this challenge. "When you're using open source, you're really dependent on the fixing speed of another team that is not getting paid, and they're just doing it because they love to do that project," he said. ... Another wrinkle is that more code is built by artificial intelligence tools - Google and Microsoft each say roughly a third of their code is AI-generated. Developers report being more productive, shipping on average 50% more code when they use AI tools. Wysopal said such AI tools appear to produce code with vulnerabilities at the same rate as classical development tools. More code shipped risks a greater number of vulnerabilities.


Powering the AI revolution: Legal and infrastructure challenges for data center development

Developing and operating AI-ready data centers necessitates specialized legal expertise across multiple disciplines. Financing attorneys provide guidance in structuring capital arrangements that support data center development, which requires substantial upfront investment before generating any operational revenue. Capital arrangements must incorporate sufficient flexibility to accommodate the rapid evolution of AI technology availability and unique power supply challenges at an individual site. Energy lawyers guide PPA negotiations, facilitate utility discussions, manage interconnection filings with relevant authorities, and resolve rate disputes when they arise. Their specialized work ensures that facilities maintain access to reliable, cost-effective power resources that meet operational requirements under all anticipated conditions. As regulatory approaches to AI infrastructure continue to evolve, energy counsel must remain current on emerging policies and their potential impact on both existing and future facilities. Technology and intellectual property specialists address essential operational aspects of data centers, including complex licensing arrangements, service level agreements, comprehensive data governance frameworks, and cross-border data flow compliance strategies.

Daily Tech Digest - June 08, 2024

Understanding Security's New Blind Spot: Shadow Engineering

Shadow engineering leaves security teams with little or no control over LCNC apps that citizen developers can deploy. These apps also bypass the usual code tests designed to flag software vulnerabilities and misconfigurations, which could lead to a breach. This lack of visibility prevents organizations from enforcing policies to keep them in compliance with corporate or industry security standards. ... LCNC apps have many of the same problems found in conventionally developed software, such as hard-coded or default passwords and leaky data. A simple application asking employees for their T-shirt size for a company event could give hackers access to their HR files and protected data. LCNC apps should routinely be evaluated for threats and vulnerabilities, so issues can be detected and remediated. ... Give citizen developers guidance in easy-to-understand terms to help them remediate risks themselves as quickly and easily as possible. Collaborate with business developers to ensure that security is integrated into the development process of LCNC applications going forward.


‘Technology must augment humanity’: An interview with former IBM CEO Ginni Rometty

While we can't control disruptions, we can control our outlook on the future. Leaders must instill confidence in their teams, emphasising the inevitability of change and the collective ability to find positive solutions. Honesty is a form of optimism, so be honest with yourself and your teams about the issues at hand, resisting attempts to ignore or minimise them. ... Problem-solving is at the core of leadership, so leaders should be unafraid to ask questions, seek insights from others, and involve their teams and wider network in finding solutions. Remember, you do not have to tackle everything alone or have all the answers. When I face a complex problem, I dissect it into manageable pieces and think through each disparate part. ... The right relationships in your life, personal and professional, provide perspective and ideas, which are essential for progress. Building a robust network—from friends and family to colleagues and industry peers—provides support and inspiration to maintain optimism and courage amid disruption. The more diverse your network, the more people you can call on to fuel your optimism and courage in the face of disruption.


How Cybersecurity and Sustainability Intersect

Cybersecurity and sustainability are discrete functions in many enterprises, yet they could benefit greatly from being de-siloed. Sustainability and cybersecurity initiatives need C-suite awareness and resources to permeate an enterprise’s culture and actually achieve their goals. “It's not a one-person show anymore. It's really an ownership in that responsibility and a stewardship that cuts across functional leadership across … the entire organization,” says Lynch. In more mature organizations, cybersecurity already has board-level involvement, which can make it easier to see and act on its intersection with sustainability. But for many organizations, cybersecurity and sustainability are separate and even back-office functions. “The cybersecurity leader should not wait for someone to come [and] invite them into these conversations,” says Govindankutty. The stakeholders who need to be involved in cybersecurity and sustainability extend beyond an enterprise’s four walls. Third-party vendors are a vital part of an enterprise’s ecosystem.


Flipping The Script On Startup Success

The first step is to identify the narrowly defined vertical market segments that the company will focus on. The second step is to find a lighthouse customer or two to focus all the team’s attention on to define the minimum viable product (MVP). That process is iterative, as the customer and the product team go back and forth over must-have features. Then the startup team tests that candidate MVP with a few other customers. ... Ask any experienced entrepreneur, investor or board member what the most important thing a startup CEO must stay on top of is, and the answer is to know at all times how much cash they have, what the monthly burn rate is and how long the runway is before cash runs out. Many mistakes are excusable and recoverable, but running out of cash by surprise is neither. ... Culture is not pizza and beer on Fridays, foosball tables or little rooms filled with toys. It is about the values of the company and how they are espoused. It is about the tone the CEO sets and how they communicate with all of their constituents. And the importance of culture is not just about company morale, although that is very important. It is about attracting and retaining the best talent. While it might be nice to think you can put this off while focusing on the first four things, you would be wrong.


Empowering Developers to Harness Sensor Data for Advanced Analytics

Data from sensors offers a treasure trove of insights from the physical world for data scientists. From tracking temperature fluctuations in a greenhouse to analyzing the vibrations of industrial machines in a manufacturing plant, these tiny devices capture crucial information that can be used for groundbreaking research and development. The journey from collecting raw sensor data to actionable analysis can be riddled with stumbling blocks, as the realities of hardware components and environmental conditions come into play. The typical approach to sensor data capture often involves a cumbersome workflow across the various teams involved, including data scientists and engineers. While data scientists meticulously define sensor requirements and prepare their notebooks to process the information, engineers deal with the complexities of hardware deployment and software updates that reduce the scientists’ ability to quickly adjust these variables on the fly. This creates a long feedback loop that delays the pace of innovation across the organization.


To lead a technology team, immerse yourself in the business first

When asked to rank the defining characteristics of a leading CIO, respondents were split between the conventional and contemporary, saying the traditional, more IT-centric qualities are just as important as the strategic and more customer-focused ones. While aligning tech vision and strategy with the business has been the role of CIOs and technology leaders for some time, the scope of their duties now extends deeper into the business itself. "Establishing and managing a tech vision isn't enough," said DiLorenzo. "Today's CIOs need to own all the various technology uses across their organizations and ensure they're actively coordinating and orchestrating their fellow tech leaders -- as well as their business peers -- to co-create a vision and tech strategy that aligns with, and furthers, the overall enterprise strategy." Getting to a leadership position also requires immersing oneself in the business, Shaikh advised. "Business acumen, which includes understanding various business functions and industry dynamics, can be cultivated by spending time in business units," she said. "This understanding is crucial for strategic thinking, to help identify opportunities where technology can impact goals."


The unseen gen AI revolution on the AI PC and the edge

The shift towards edge and PC-based AI is not without its challenges. Privacy and security concerns are paramount, as devices become more autonomous and capable of processing sensitive data. Companies must make privacy and AI ethics the cornerstone of their approach, ensuring that as AI becomes more integrated into our devices, it does so in a manner that respects user privacy and trust. Moreover, the energy efficiency of AI workloads is a critical consideration, especially for battery-powered devices. Advancements in low-power, high-performance processors are pivotal in addressing this challenge, ensuring that the benefits of gen AI are not offset by decreased device longevity or increased environmental impact. Intel’s OpenVINO toolkit further enhances these benefits by optimizing deep learning models for fast, efficient performance across Intel’s hardware portfolio. This optimization enables customers to deploy AI applications more widely, even in resource-constrained environments, without sacrificing performance. As we enter this new era, the way we think about gen AI and how we engage with it will continue to change. 


Enhancing Cloud Security in Response to Growing Digital Threats

Hybrid cloud environments, where public clouds combine with on-premises infrastructure, pose unique security challenges. Secure migration tools and techniques are vital to prevent data leaks or unauthorized access. Encrypt data before transferring it and place controls on both ends during migration to reduce associated risks. Network segmentation in hybrid cloud environments requires thorough interconnectivity planning. Carefully configure firewall rules and network access controls to ensure only authorized traffic flows between on-premises resources and those hosted within the cloud. Visibility across hybrid cloud environments requires centralized monitoring to enhance threat detection capability. SIEM solutions can collect security logs from both on-premises and cloud systems, helping provide a unified view of an enterprise’s security posture. The more organizations embrace cloud computing, the more preparation for emerging trends is required. Zero-trust security models, which require continuous authentication and authorization regardless of device or location, are increasingly popular.


Ethical Issues in Information Technology (IT)

Establishing ethical IT practices is also important because people’s trust in the tech industry chips away each time they learn about unethical practices, especially in the wake of reports on data usage by companies such as Facebook and Google. “If companies don’t have ethical IT practices in place, they’re going to lose the trust of their customers and clients,” says Ferebee. “IT professionals need to take it seriously. They also need to let the public know they take it seriously so the public feels safe using their products and services.” Whether or not you’re in a leadership position, it is important to lead by example when it comes to ethics in IT. “People are often afraid to speak up because they’re concerned with the repercussions,” says Ferebee. “But when it comes to ethics in IT, you need to speak up — lead by example, advocate for it, and talk about it all the time. That could include reporting ethical issues, sourcing or creating and then implementing ethics training, and developing internal frameworks for your IT department. You don’t have to be the director of IT to start implementing this.”


Establishing Trust in AI Systems: 5 Best Practices for Better Governance

Security culture drives both behaviors and beliefs. A security-first organization promotes information sharing, transparency, and collaboration. When risks are discovered, or when issues occur, communication should be immediate and designed to clearly convey to employees how their behaviors and actions can both support and detract from security efforts. Enlist employees in these efforts by ensuring that your culture is positive and supportive. ... Security culture does not exist in a vacuum and does not evolve in a silo. Input from a wide range of stakeholders—from employees to customers and partners, regulators, and the board—is critical for ensuring that you understand how AI is enabling efficiencies, and where risks may be emerging. ... By seeking input from key constituents in an open and transparent manner, they will be more likely to share their concerns and help uncover potential risks while there’s still time to adequately address those risks. Acknowledge and respond to feedback promptly, and highlight the positive impacts of that feedback.



Quote for the day:

"Don't wait for the perfect moment take the moment and make it perfect." -- Aryn Kyle

Daily Tech Digest - December 21, 2023

The New HR Playbook: Catalyze Innovation With Analytics And AI

Metaverse and blockchain technologies — underpinned by data and AI — also offer a lot of possibilities for improving HR practices. The metaverse, a shared virtual space bridging physical and digital realities, offers avenues for remote workspaces and virtual collaboration. It can enhance recruitment, onboarding, training, and development processes by providing immersive and interactive experiences that engage candidates and employees on a new level. The metaverse could also help companies with decentralized teams cultivate a strong organizational culture by giving employees a shared virtual space for interaction and engagement. Blockchain technology offers transparency and security that can have profound implications for HR processes. HR departments can use blockchain to improve the security of record-keeping, verify employee credentials, and simplify benefits administration. Blockchain can also streamline payroll processes, especially for international employees. Companies can even use blockchain to create decentralized, employee-driven platforms for collaboration and communication.


Why 2024 will be the year of the CISO

As the ESG/ISSA research indicates, many fed-up CISOs will retire, while others will move on to become virtual CISOs (vCISOs) or take field CISO positions with security technology vendors. We'll read numerous stories next year about CISOs up and quitting on the spur of the moment. While the reasons won't be disclosed, you can bet they are among those cited above. Competition for qualified candidates will be fierce. On a side note, I don't believe there is a significant population of next-generation CISO candidates with the right experience to step up. In 2024, we will augment our general discussion of the global cybersecurity skills shortage with a specific addendum about the CISO shortage. CISO pay and compensation will rise precipitously. Aside from a handful of $1 million positions, CISOs aren't paid nearly as much as one might assume. Salary.com calculates a median salary of about $241,000 with 90% of CISOs making $302,000 or less. Given the job requirements (long hours, stress, being on-call, etc.), this isn't very much. With the competition for candidates, firms will greatly increase base pay, perks, and bonuses, leading to hyper CISO salary inflation.


Hot Jobs in AI/Data Science for 2024

“The new and highly specialized role known as the ‘LLM Engineer’ is primarily found within organizations that have reached an advanced stage in their AI journey, having conducted numerous experiments but now facing challenges in the operationalization of their AI models at scale,” says Kjell Carlsson, head of data science strategy and evangelism at Domino Data Lab. ... “Some of the most sought-after AI positions today include machine learning engineer, AI engineer, and AI architect,” says Shmuel Fink, chair of the Master of Science in Data Analytics program at Touro University Graduate School of Technology. “Nevertheless, several other AI roles are also gaining prominence, such as AI ethicist, AI product manager, AI researcher, computer vision engineer, robotics engineer, and AI safety engineer. Moreover, there are positions that require industry-specific expertise, like a healthcare AI engineer.” But back at the ranch, employees in any job role will become more valuable if they possess AI skills. As they gain those skills, some specialized job roles will evolve while others disappear.


How Blockchain Will Change Organizations

The fact that blockchain is a distributed database means it is very difficult to delete data. Once something has been recorded on the blockchain, it becomes part of the permanent record. The data stored on a blockchain is immutable, meaning that it cannot be changed or deleted. This traceability is a key advantage of blockchain technology: it can be useful for tracking the provenance of goods and tracing the origins of data. It also has implications for compliance, as organizations will be able to show exactly what data they have and where it came from. ... Under the traditional centralized model, organizations have complete control over the data they store. With blockchain technology, by contrast, individuals have full control over their data, because each user has a private key that is used to access it. This control is another key advantage: users can be sure that their data is safe and secure, and that they can share it with whomever they choose.


Industry Impact: Celebrating IT's Milestones and Achievements This Year

The integration of AI into various solutions, including observability, IT service management, and database solutions, has allowed for greater automation of the mundane tasks that often bog down IT pros and hinder organizations from accelerating their digital transformations. AI-powered capabilities free up valuable time for IT pros, allowing them to focus on the most important tasks at hand. Autonomous operations, enabled by purpose-built models for IT operations and large language models, are poised to revolutionize IT environments in the coming years, reducing operation costs and bettering the lives of those in the tech workforce. ... The IT industry has a smorgasbord of accomplishments that have enriched the digital lives of organizations this year. The industry’s cloud migration journey, in particular, has played a central role in allowing organizations to scale their operations and pivot rapidly in response to market conditions. The cloud journey has transformed the way businesses operate, offering scalability, flexibility, and cost-efficiency. 


An IT Carol: How the Ghosts of IT Past and Present Can Help Improve the Future

You see yourself sitting at your desk, frantically trying to juggle more service desk tickets than you ever thought were possible. The trip to the future also shows the vast number of new complex systems that teams are using. As applications, networks, databases, and infrastructures grew in complexity, so did the tools and solutions we need to manage them. This has created a future where IT pros are trying to navigate and manage some of the most complex systems and environments imaginable. Teams are more overworked than ever before. You spend so much time fighting fires that you have no time to build better technology that provides important new capabilities. You have almost no time to think about anything else, let alone spend the holiday with family or friends. Thankfully, this is not a future that has to be, but rather one we can avoid if we take the right steps today. Right now, we are on the path to improving the lives of IT teams through the integration of artificial intelligence (AI). IT solutions powered by AI, such as observability and ITSM, can help manage the complex IT environments we are witnessing through ongoing digital transformation and the move to the cloud.


Why data, AI, and regulations top the threat list for 2024

Some of the essential questions security teams ought to be asking themselves include: How do we manage and safeguard aspects like confidentiality, integrity, and availability of data? What strategies can we employ to protect our data against cyber threats and misuse? How do we address the security challenges that emerge with expanding data repositories? How do we differentiate between valuable data and redundant information? Furthermore, there’s often a misalignment in how data is structured versus the business framework. Consequently, security teams may need to engage in discussions with business units to clarify issues such as how we are applying our data. With whom is this data being shared? ... Although AI technologies aren’t new, the recent widespread adoption of AI has introduced a myriad of business and security challenges for organizations. Key questions to consider include: How do we monitor AI usage within the organization? How do we regulate the data shared with AI systems by employees? How do we ensure ongoing compliance with ethical standards and legal requirements?


2023 - The year of transformation and harmonisation

Millennial leaders bring a distinctively dynamic, digitalised approach to their roles, characterised by agility, openness, proactiveness, and hands-on engagement. Their adeptness in navigating the digital landscape seamlessly allows them to forge strong connections within their predominantly Gen Z and millennial workforce. This workforce, in turn, embodies an informed, forward-looking, and tech-savvy ethos, driven by cutting-edge technologies that facilitate smart and efficient work practices. In the world of leading-edge technologies, the arrival of ChatGPT from OpenAI in the preceding November continued to take centre stage. Throughout the year, there was a surge in competition and discussion surrounding AI, particularly generative AI, which gained momentum. Amidst these discussions, Google's introduction of Bard added fervour to the debate, igniting intense conversations about the potential impact of generative AI on employment and the perceived threat to various job roles. This stirred a pot of mixed emotions—feelings of anxiety, uncertainty, and ambiguity swirled within the tech sphere.


Small businesses lead the way, while larger industries lag in tech adoption

On the other hand, many leaders in the small and mid-sized industrial sector are in the age group of 50 and above. When they initially embarked on their careers in the core industry, the adoption of IT and technology in their companies was significantly lower. Technology was not as pervasive, and IT integration was often considered an unnecessary expense. For those who did attempt computerisation in the early 2000s, the experience was often disheartening. Small IT companies that provided software solutions during that period often faced challenges and many even disappeared. The owners of these companies, faced with the uncertainty and challenges of running a technology-based business, opted for well-paying jobs instead. This experience left a lasting impact on their perception of technology and its role in business operations. Moreover, the proliferation of the internet and the rise of startups introduced a new paradigm. Many services and software were offered for free or at significantly reduced rates, fostering an expectation of inexpensive or cost-free technology solutions. This demotivated many software company owners from continuing in the business. 


What’s Ahead for AI In 2024: The Transformative Journey Continues

The coming year will see a shift in how generative AI is employed by businesses, with a greater emphasis on using organizational data. Companies are increasingly cautious about sharing sensitive data on public platforms, opting instead to host private foundation models within their four walls. This move is driven by concerns over data security and the desire to customize AI applications to specific organizational needs. By using their own data, companies can ensure that AI output is relevant and in context. This trend will lead to innovative applications of generative AI in a variety of business functions. ... New tuning techniques such as prompt tuning and retrieval augmented generation (RAG) will gain popularity next year. These methods provide more context-specific adjustments to AI models without the need for extensive retraining. Prompt tuning, for example, uses smaller pre-trained models to encode text prompts; RAG combines specific information with prompts to enhance the relevance of the model's output.
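
As a minimal sketch of the RAG pattern described above: retrieve the most relevant internal documents for a query and prepend them to the prompt so the model answers in context. Retrieval here uses plain TF-IDF to keep the example self-contained (production systems typically use embedding-based vector search), and the documents, query, and final LLM call are hypothetical placeholders.

```python
# Minimal RAG sketch: retrieve the most relevant documents for a query and
# prepend them to the prompt. TF-IDF keeps the example self-contained;
# production systems typically use embedding-based vector search.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical internal knowledge base.
documents = [
    "Refunds are processed within 5 business days of approval.",
    "Enterprise contracts renew annually unless cancelled 30 days prior.",
    "Support hours are 9am-6pm ET, Monday through Friday.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

query = "How long do refunds take?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # pass this prompt to the privately hosted foundation model
```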



Quote for the day:

"People who avoid failure also avoid success.” -- Robert T. Kiyosaki

Daily Tech Digest - December 11, 2023

Enterprise Architecture – Supporting Resources on Demand

As the subscription economy grows, the market could become saturated with providers offering varying levels of service quality. Businesses should carefully evaluate their options, considering factors such as customer support, scalability, and the sophistication of available resources. The positive impact of selling EA as a subscription service, however, is clear. With more service providers offering cloud solutions, there is more competition for your business. You, as the business customer, have more options, which can lead to better services and pricing. Business customers of all sizes can get access to advanced technology and data storage capabilities through a subscription. This can open economic doors in developing nations, extending business growth to players who would otherwise not be able to participate in a digital transformation journey. This fosters a more inclusive and diverse tech landscape, where breakthroughs can emerge from unexpected corners of the business world. You can focus on growing your core business without the traditional burdens of upfront investment and the complexity of building and managing infrastructure from scratch.


Trends in Data Governance and Security: What to Prepare for in 2024

In 2023, many companies turned to do-it-yourself (DIY) data governance to manage their data. Yet, without the help of data governance experts or professionals, this proved insufficient, owing to the compliance gaps and data security errors it leaves in its wake. While DIY data governance seemed like a cost-effective solution, it has serious consequences, leaving companies exposed to data breaches and other security threats, because it often lacks the comprehensive security protocols and expertise that professional data governance provides. Worse, the approach often involves piecemeal solutions that do not integrate well with each other, creating security gaps and leaving data vulnerable to attack. As a result, DIY data governance may not be able to keep up with the constantly evolving data privacy landscape, including new regulations and compliance requirements. Companies that rely on do-it-yourself data governance are exposing themselves to significant risks and will see the repercussions of this in 2024. 


Generative AI is off to a rough start

One big problem, among several others that Duckbill Chief Economist Corey Quinn highlights, is that although AWS felt compelled to position Q as significantly more secure than competitors like ChatGPT, it’s not. I don’t know that it’s worse, but it doesn’t help AWS’ cause to position itself as better and then not actually be better. Quinn argues this comes from AWS going after the application space, an area in which it hasn’t traditionally demonstrated strength: “As soon as AWS attempts to move up the stack into the application space, the wheels fall off in major ways. It requires a competency that AWS does not have and has not built up since its inception.” Perhaps. But even if we accept that as true, the larger issue is that there’s so much pressure to deliver on the hype of AI that great companies like AWS may feel compelled to take shortcuts to get there (or to appear to get there). The same seems to be true of Google. The company has spent years doing impressive work with AI yet still felt compelled to take shortcuts with a demo. As Parmy Olson captures, “Google’s video made it look like you could show different things to Gemini Ultra in real time and talk to it. You can’t.”


CIOs grapple with the ethics of implementing AI

Even with a team focused on AI, identifying risks and understanding how the organization intends to use AI both internally and publicly is challenging, McIntosh says. Team members must also understand and address the inherent possibility of AI bias, erroneous claims, and incorrect results, he says. “Depending on the use cases, the reputation of your company and brand may be at stake, so it’s imperative that you plan for effective governance.” With that in mind, McIntosh says it’s critical that CIOs “don’t rush to the finish line.” Organizations must create a thorough plan and focus on developing a governance framework and AI policy before implementing and exposing the technology. Identifying appropriate stakeholders, such as legal, HR, compliance and privacy, and IT, is where Plexus started its ethical AI process, McIntosh says. “We then created a draft policy to outline the roles and responsibilities, scope, context, acceptable use guidelines, risk tolerance and management, and governance,” he says. “We continue to iterate and evolve our policy, but it is still in development. We intend to implement it in Q1 2024.”


Accenture takes an industrialized approach to safeguarding its cloud controls

Accenture developed a virtual cloud control factory to support five major, global cloud infrastructure providers and enable reliable inventory; consistent log and alert delivery to support security incident detection; and predictable, stable, and repeatable processes for certifying cloud services and releasing security controls. The factory features five virtual "departments": research and development performs service certification, control definition, selection, measurement, and continual re-evaluation; the production floor designs and builds controls; quality assurance tests the controls; shipping and receiving integrates controls with compliance reporting tools; and customer service supports users after a control goes live. "What we decided to do was centralize that cloud control development, get all the needs into one place, start organizing them in a way that we could run them through a factory and get them out there so people can use common controls, common architecture that had a chance of keeping up with [our engineers'] innovation sitting on top of the [major cloud platforms'] innovation," Burkhardt says.
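
To make the division of labor concrete, here is a minimal sketch (in Python) of how such a control pipeline might be modeled. The five stage names follow the article's departments, but the data structure, function names, and example control are illustrative assumptions, not Accenture's actual implementation.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class CloudControl:
        """A security control moving through the factory (illustrative model)."""
        name: str
        provider: str                       # one of the five supported cloud providers
        history: List[str] = field(default_factory=list)

    def research_and_development(c: CloudControl) -> CloudControl:
        c.history.append("service certified; control defined, selected, measured")
        return c

    def production_floor(c: CloudControl) -> CloudControl:
        c.history.append("control designed and built")
        return c

    def quality_assurance(c: CloudControl) -> CloudControl:
        c.history.append("control tested")
        return c

    def shipping_and_receiving(c: CloudControl) -> CloudControl:
        c.history.append("integrated with compliance reporting tools")
        return c

    def customer_service(c: CloudControl) -> CloudControl:
        c.history.append("live; user support engaged")
        return c

    # The factory: each department is one stage in a fixed pipeline.
    PIPELINE: List[Callable[[CloudControl], CloudControl]] = [
        research_and_development, production_floor, quality_assurance,
        shipping_and_receiving, customer_service,
    ]

    def run_factory(control: CloudControl) -> CloudControl:
        for stage in PIPELINE:
            control = stage(control)
        return control

    print("\n".join(run_factory(CloudControl("block-public-storage", "aws")).history))

The point of the fixed pipeline is the same as the article's: every control, for every provider, passes through identical stages, which is what makes the process predictable and repeatable.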


Pressure on Marketers Will Drive Three Key Data Moves in 2024

Data clouds help achieve that goal. In both time and expense, organizations can no longer afford to jump between different systems to try to make sense of what a customer wants and formulate a real-time response in the moment of interaction. With a CDP sitting directly on top of a data cloud, it is easier and less expensive to build a unique customer profile and then activate that profile across multiple systems. Organizations recognize that first-party data is a valuable asset and is the foundation for delivering a personalized customer experience (CX), but for too long business users have been stymied by complex, unintegrated marketing stacks and time-consuming data transformations. That approach to making data actionable -- turning data into insight -- is no longer sustainable when customers expect real-time, personalized experiences that are consistent across channels. ... Moving to a data cloud and coupling it with a CDP’s automated data quality and identity resolution addresses these issues head-on, and that trend will continue -- particularly for customer-facing brands that see a data cloud with an enterprise-grade CDP as a relatively fast, inexpensive way to monetize their customer data.
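
As a rough illustration of the identity-resolution step a CDP performs, the sketch below merges raw records that share a deterministic identifier (email or phone) into unified profiles using union-find. The record fields, sample data, and matching rules are assumptions made for the example, not any vendor's actual API.

    from collections import defaultdict

    # Hypothetical raw events from separate channels; field names are illustrative.
    records = [
        {"source": "web",   "email": "dana@example.com", "phone": None},
        {"source": "store", "email": "dana@example.com", "phone": "+1-555-0100"},
        {"source": "app",   "email": None,               "phone": "+1-555-0100"},
        {"source": "web",   "email": "lee@example.com",  "phone": None},
    ]

    # Union-find over record indices: records sharing any identifier get merged.
    parent = list(range(len(records)))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    def union(i: int, j: int) -> None:
        parent[find(i)] = find(j)

    first_seen = {}  # identifier value -> first record index that carried it
    for idx, rec in enumerate(records):
        for key in ("email", "phone"):
            value = rec[key]
            if value is None:
                continue
            if value in first_seen:
                union(idx, first_seen[value])
            else:
                first_seen[value] = idx

    # Collapse each cluster into a single unified customer profile.
    profiles = defaultdict(lambda: {"sources": set(), "email": None, "phone": None})
    for idx, rec in enumerate(records):
        profile = profiles[find(idx)]
        profile["sources"].add(rec["source"])
        profile["email"] = profile["email"] or rec["email"]
        profile["phone"] = profile["phone"] or rec["phone"]

    for profile in profiles.values():
        print(profile)   # two profiles: Dana (web, store, app) and Lee (web)

Real CDPs layer probabilistic matching and data-quality rules on top, but the core idea is the same: stitch fragmented records into one profile that can be activated across systems.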


Initial Agile Requirements and Architecture Modeling

Talk to most agilists, and particularly the purists, and they'll claim that they don't do any modeling up front. This, of course, is completely false; they just use different terminology, such as "populate the backlog" rather than initial requirements modeling and "identify a runway" instead of initial architecture modeling. Sigh. Some of the more fervent agilists may even tell you about the evils of big modeling up front, which is why they choose to eschew anything that smells like up-front thinking. ... The goal of initial architecture modeling on an agile team is to identify what the team believes to be a viable strategy for building the solution. Sufficiency is determined by your stakeholders: can you exhibit an understanding of the existing environment and the future direction of your organization, and show how your proposed strategy reflects that? Your initial architecture model should be just barely good enough (JBGE), in that it addresses, at a high level, the business and technical landscapes that your solution will operate within. This modeling effort is often led, not dictated, by the architecture owner on your team.


Why are IT professionals not automating?

25% of participants highlighted cost and resources as potential obstacles. They wonder if they need to create a custom solution and, if so, whether it's cost-effective or cheaper to continue with manual maintenance. They are also concerned about the resources required to maintain an automated solution. 20% admit that they and their teams lack the knowledge or expertise to choose an automated solution; they are not familiar with automation in general or with the specific requirements of automating their systems. The survey results clearly indicate that many IT professionals are not familiar with, or don't see the value of, certificate automation. Or is it that they simply haven't thought about it enough? After all, certificates have been part of our IT infrastructure for a very long time; while they are not exciting, they do work, so why fix something that is not broken? Unfortunately, when the 90-day Google edict eventually becomes reality, it will increase the need for renewal/replacement of SSL/TLS certificates to four times (4X) the current pace (most public certificates today are renewed roughly once a year, so a 90-day lifetime means renewing about four times as often). IT professionals may be underestimating the burden that this will put on their teams.
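
As a minimal sketch of the first step toward automation, knowing when each certificate expires, the snippet below checks a host's TLS certificate using only the Python standard library. The host names and the 30-day renewal threshold are illustrative assumptions, not from the survey.

    import socket
    import ssl
    import time

    def days_until_expiry(hostname: str, port: int = 443) -> float:
        """Return the number of days until the host's TLS certificate expires."""
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        # 'notAfter' looks like 'Jun  1 12:00:00 2024 GMT'
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return (expires - time.time()) / 86400

    # Hypothetical inventory and threshold; adjust to your environment.
    for host in ("example.com", "example.org"):
        remaining = days_until_expiry(host)
        status = "RENEW NOW" if remaining < 30 else "ok"
        print(f"{host}: {remaining:.0f} days left [{status}]")

At a 90-day lifetime, every certificate crosses that 30-day threshold several times a year, which is exactly the recurring workload that automation is meant to absorb.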


How Could AI Be a Tool for Workers?

The benefits for companies designing and using AI systems are vast and readily apparent. Tools that can complete work in a fraction of the time at a fraction of the cost are a boon for the bottom line. “The main beneficiaries of the technology are global technology giants primarily based in the United States,” says Michael Allen, CTO of enterprise content management company Laserfiche. He points out that these companies have the resources to accrue the massive amounts of data required to train AI models. Companies that adopt these powerful AI models can leverage them to cut costs. Allen points out that many companies will likely use AI to shift away from outsourcing. “A lot of firms outsource mostly routine clerical work to places like India, and I believe that's going to be threatened or impacted significantly by AI that will be able to do that work faster and cheaper,” he says. The way that AI devalues entry-level work is already being seen. Stephanie Bell is a senior research scientist at the nonprofit coalition Partnership on AI, which created guidelines to ensure AI economic benefits are shared. She offers examples in the digital freelance market. 


Bryan Cantrill on AI Doomerism: Intelligence Is Not Enough

Cantrill had titled his talk “Intelligence is not enough: the humanity of engineering.” Here the audience realizes they’re listening to the proud CTO of a company that just shipped its own dramatically redesigned server racks. “I want to focus on what it takes to actually do engineering… I actually do have a bunch of recent experience building something really big and really hard as an act of collective engineering…” Importantly, the common thread for these bugs was “emergent” properties — things not actually designed into the parts, but emerging when they’re all combined together. “For every single one of those, there is no piece of documentation. In fact, for several of those, the documentation was actively incorrect. The documentation would mislead you ... Cantrill put up a slide saying “Intelligence alone does not solve problems like this,” presenting his team at Oxide as possessed of something uniquely human. “Our ability to solve these problems had nothing to do with our collective intelligence as a team…” he tells his audience. “We had to summon the elements of our character. Not our intelligence — our resilience.”


Quote for the day:

“I'd rather be partly great than entirely useless.” -- Neal Shusterman