
Daily Tech Digest - May 06, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti


A Primer for CTOs: Taming Technical Debt

Taking a head-on approach is the most effective way to address technical debt, since it gets to the core of the problem instead of slapping a new coat of paint over it, Briggs says. The first step is for leaders to work with their engineering teams to determine the current state of data management. "From there, they can create a realistic plan of action that factors in their unique strengths and weaknesses, and leaders can then make more strategic decisions around core modernization and preventative measures." Managing technical debt requires a long-term view. Leaders must avoid the temptation of thinking that technical debt only applies to legacy or decades-old investments, Briggs warns. "Every single technology project has the potential to add to or remove technical debt." He advises leaders to take a cue from medicine's Hippocratic Oath: "Do no harm." In other words, stop piling new debt on top of the old. ... Technical debt can be useful when it's a conscious, short-term trade-off that serves a larger strategic purpose, such as speed, education, or market/first-mover advantage, Gibbons says. "The crucial part is recognizing it as debt, monitoring it, and paying it down before it becomes a more serious liability," he notes. Many organizations treat technical debt as something they're resigned to live with, as inevitable as the laws of physics, Briggs observes.


AI agents are a digital identity headache despite explosive growth

“AI agents are becoming more powerful, but without trust anchors, they can be hijacked or abused,” says Alfred Chan, CEO of ZeroBiometrics. “Our technology ensures that every AI action can be traced to a real, authenticated person—who approved it, scoped it, and can revoke it.” ZeroBiometrics says its new AI agent solution makes use of open standards and technology, and supports transaction controls including time limits, financial caps, functional scopes and revocable keys. It can be integrated with decentralized ledgers or PKI infrastructures, and is suggested for applications in finance, healthcare, logistics and government services. The lack of identity standards suited to AI agents is creating a major roadblock for developers trying to address the looming market, according to Frontegg. That is why it has developed an identity management platform for developers building AI agents, saving them from spending time building ad-hoc authentication workflows, security frameworks and integration mechanisms. Frontegg’s own developers discovered these challenges when building the company’s autonomous identity security agent Dorian, which detects and mitigates threats across different digital identity providers. “Without proper identity infrastructure, you can build an interesting AI agent — but you can’t productize it, scale it, or sell it,” points out Aviad Mizrachi, co-founder and CTO of Frontegg.
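The transaction controls described above — time limits, financial caps, functional scopes, and revocable keys — can be sketched as a small delegated-credential check. This is an illustrative model only, not ZeroBiometrics' or Frontegg's actual API; the `AgentGrant` class and its fields are assumptions for the sake of the example.

```python
from dataclasses import dataclass, field
import time

@dataclass
class AgentGrant:
    """A delegated credential tying an AI agent's actions to a real approver."""
    approver: str                               # authenticated person who issued the grant
    scopes: set = field(default_factory=set)    # functional scopes, e.g. {"payments:send"}
    spend_cap: float = 0.0                      # financial cap in currency units
    expires_at: float = 0.0                     # time limit (epoch seconds)
    revoked: bool = False                       # revocable key: flip to kill the grant
    spent: float = 0.0

    def authorize(self, scope: str, amount: float = 0.0) -> bool:
        """Every agent action is checked against the grant before it runs."""
        if self.revoked or time.time() >= self.expires_at:
            return False
        if scope not in self.scopes:
            return False
        if self.spent + amount > self.spend_cap:
            return False
        self.spent += amount
        return True

grant = AgentGrant(approver="alice", scopes={"payments:send"},
                   spend_cap=100.0, expires_at=time.time() + 3600)
assert grant.authorize("payments:send", 60.0)       # within cap and scope
assert not grant.authorize("payments:send", 60.0)   # would exceed the financial cap
grant.revoked = True
assert not grant.authorize("payments:send", 1.0)    # revocation always wins
```

The point of the design is traceability: every action carries the `approver` who scoped it, and revocation or expiry cuts the agent off without touching the agent itself.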


Rethinking digital transformation for the agentic AI era

Most CIOs already recognize that generative AI presents a significant evolution in how IT departments can deliver innovations and manage IT services. “Gen AI isn’t just another technology; it’s an organizational nervous system that exponentially amplifies human intelligence,” says Josh Ray, CEO of Blackwire Labs. “Where we once focused on digitizing processes, we’re now creating systems that think alongside us, turning data into strategic foresight. The CIOs who thrive tomorrow aren’t just managing technology stacks; they’re architecting cognitive ecosystems where humans and AI collaborate to solve previously impossible challenges.” IT service management (ITSM) is a good starting point for considering gen AI’s potential. Network operation centers (NOCs) and site reliability engineers (SREs) have been using AIOps platforms to correlate alerts into time-correlated incidents, improve the mean time to resolution (MTTR), and perform root cause analysis (RCA). As generative and agentic AI assists more aspects of running IT operations, CIOs gain a new opportunity to realign IT ops with more proactive and transformative initiatives. ... “Opportunities such as gen AI for hotfix development and predictive AI to identify, correlate, and route incidents for improved incident response are transforming our business, resulting in improved customer satisfaction, revenue retention, and engineering efficiency.”
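The AIOps step mentioned above — correlating alerts into time-correlated incidents — can be sketched minimally as grouping alerts whose timestamps fall within a window of each other. The five-minute window is an illustrative assumption, not any particular platform's behavior.

```python
from datetime import datetime, timedelta

def correlate(alerts, window=timedelta(minutes=5)):
    """Group (timestamp, message) alerts into incidents: an alert within
    `window` of the previous alert joins the same incident."""
    incidents, current = [], []
    for ts, msg in sorted(alerts):
        if current and ts - current[-1][0] > window:
            incidents.append(current)   # gap too large: close the incident
            current = []
        current.append((ts, msg))
    if current:
        incidents.append(current)
    return incidents

t0 = datetime(2025, 1, 1, 9, 0)
alerts = [(t0, "db latency"), (t0 + timedelta(minutes=2), "api 5xx"),
          (t0 + timedelta(hours=1), "disk full")]
print(len(correlate(alerts)))  # 2 incidents: the first two alerts correlate
```

Collapsing a flood of raw alerts into a handful of incidents like this is what moves the MTTR needle: responders triage two incidents, not three (or three thousand) alerts.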


Strengthening Software Security Under the EU Cyber Resilience Act: A High-Level Guide for Security Leaders and CISOs

One of the hardest CRA areas for organizations to get a handle on is knowing and proving where appropriate controls and configurations are in place vs. where they’re lacking. This lack of visibility often leads to underutilized licenses, unchecked areas of product development, and the potential for unauthorized access into sensitive areas of the development environment. One of the ways security-conscious organizations are combating this is through the creation of “paved pathways” that specify the exact technology and security tooling to be used across all their development environments, but this requires extreme vigilance against deviations within those environments, with few ways to automate adherence to those standards. Legit Security not only automatically inventories and details what and where controls exist within an SDLC so you can ensure 100% coverage of your application portfolio, but we also analyze all of the configurations throughout the entirety of the build process to find any that could allow for supply chain attacks or unauthorized access to SCMs or CI/CD systems. This ensures that your teams are using secure defaults and putting appropriate guardrails into development workflows. This also automates baseline enforcement, configuration management, and quick resets to a known safe state when needed.


Observability 2.0? Or Just Logs All Over Again?

As observability solutions have ostensibly become more mature over the last 15 years, we still see customers struggle to manage their observability estates, especially with the growth of cloud native architectures. So-called “unified” observability solutions bring tools to manage the three pillars, but cost and complexity continue to be major pain points. Meanwhile, the volume of data has kept rising, with 37% of enterprises ingesting more than a terabyte of log data per day. Legacy logging solutions typically deal with the problems of high data volume and cardinality through short retention windows and tiered storage — meaning that data is either thrown away after a fairly short period of time or stored in frozen tiers where it goes dark. Meanwhile, other time series or metric databases take high-volume source data, aggregate it into metrics, then discard the underlying logs. Finally, tracing generates so much data that most traces aren’t even stored in the first place. Head-based sampling retains a small percentage of traces, typically random, while tail-based sampling allows you to filter more intelligently but at the cost of efficient processing. And then traces are typically discarded after a short period of time. There’s a common theme here: While all of the pillars of observability provide different ways of understanding and analyzing your systems, they all deal with the problem of high cardinality by throwing data away.
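The head-based vs. tail-based sampling trade-off described above can be sketched in a few lines. The hashing scheme, the 10% rate, and the 500 ms slow-trace threshold are illustrative assumptions, not any vendor's implementation.

```python
import hashlib

def head_sample(trace_id: str, rate: float = 0.01) -> bool:
    """Head-based sampling: decide at the root span, before the trace has
    even finished. Hashing the trace ID keeps the decision deterministic,
    so every service in the request path keeps or drops the same traces."""
    bucket = int(hashlib.md5(trace_id.encode()).hexdigest(), 16) % 10_000
    return bucket < rate * 10_000

def tail_sample(trace) -> bool:
    """Tail-based sampling: decide after buffering the whole trace, so the
    interesting ones (errors, slow requests) can always be kept -- at the
    cost of buffering and processing every trace first."""
    return trace["error"] or trace["duration_ms"] > 500

kept = sum(head_sample(f"trace-{i}", rate=0.1) for i in range(10_000))
print(f"head sampling kept roughly {kept} of 10000 traces")
assert tail_sample({"error": False, "duration_ms": 900})  # slow trace survives
```

Note how head sampling is blind: an error trace is just as likely to be discarded as a healthy one, which is exactly the "throwing data away" theme the article identifies.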


What it really takes to build a resilient cyber program

A good place to begin is the ‘Identify’ phase from NIST’s Incident Response guide. You need to identify all of your risks, vulnerabilities, and assets. Prioritize them and then determine the best way to protect and detect threats against those assets. Assets include not only physical things like laptops and phones, but also anything in a cloud service provider, SaaS applications, and digital items like domain names. Determine the threats, risks, and vulnerabilities to those assets, prioritize them, and decide how your organization is going to protect and monitor them. Most organizations don’t have a very good idea of what they actually own, which is why they tend to be reactive and waste time on actions that do not apply to them. How often has a security analyst been asked whether a recently disclosed zero-day affects the company? They perform the scans and pull in data manually, only to discover they don’t run that piece of software or hardware. ... Many organizations use a red team exercise to try to blame a person or group for a deficiency, or even to score an internal political point. That will never end well for anyone. The name of the game is improving your security posture, and these exercises help identify areas of weakness. There might be things that don’t get fixed immediately, or maybe ever, but knowing that the gap exists is the critical first step.
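The "does this zero-day affect us?" question above is answered in seconds when an asset inventory exists. A minimal sketch, with an entirely hypothetical inventory (real data would come from a CMDB or scanner):

```python
# Hypothetical inventory: each asset lists the software it runs.
inventory = [
    {"asset": "web-01",   "software": {"nginx 1.24", "openssl 3.0"}},
    {"asset": "laptop-7", "software": {"chrome 120"}},
    {"asset": "corp.example", "software": set()},   # digital asset: a domain name
]

def exposed_to(product: str):
    """Answer 'does a newly disclosed zero-day in `product` affect us?'
    by querying inventory instead of running ad-hoc scans."""
    return [a["asset"] for a in inventory
            if any(s.startswith(product) for s in a["software"])]

print(exposed_to("openssl"))   # ['web-01']
print(exposed_to("log4j"))     # [] -> the advisory does not apply to us
```

The empty result for the second query is exactly the time-saver the article describes: no manual scan needed to conclude the advisory is irrelevant.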


Top tips for successful threat intelligence usage

“The value of threat intelligence is directly tied to how well it is ingested, processed, prioritized, and acted upon,” wrote Cyware in their report. This means a careful integration into your existing constellation of security tools so you can leverage all your previous investment in your acronyms of SOARs, SIEMs and XDRs. According to the Greynoise report “you have to embed the TIP into your existing security ecosystem, making sure to correlate your internal data and use your vulnerability management tools to enhance your incident response and provide actionable analytics.” The keyword in that last sentence is actionable. Too often threat intel doesn’t guide any actions, such as kicking off a series of patches to update outdated systems, or remediation efforts to firewall a particular network segment or taking offline an offending device. ... Part of the challenge here is to prevent siloed specialty mindsets from making the appropriate remedial measures. “I’ve seen time and time again when the threat intel or even the vulnerability management team will send out a flash notification about a high priority threat only for it to be lost in a queue because the threat team did not chase it up. It’s just as important for resolver groups to act as it is for the threat team to chase it,” Peck blogged.


How empathy is a leadership gamechanger in a tech-first workplace

Empathy isn’t just about creating a feel-good workplace—it’s a powerful driver of innovation and performance. When leaders lead with empathy, they unlock something essential: a work culture where people feel safe to speak up, take risks, and bring their boldest ideas to life. That’s where real progress happens. Empathy also enhances productivity: employees who feel valued and supported are more motivated to perform at their highest potential. Research shows that organisations led by empathetic leaders experience a 20% increase in customer loyalty, underscoring the far-reaching impact of a people-first approach. When employees thrive, so do customer relationships, business outcomes, and overall organisational growth. In India, where workplace dynamics are often shaped by hierarchical structures and collectivist values, empathetic leadership can be transformative. By prioritising open communication, recognition, and personal development, leaders can strengthen employee morale, increase job satisfaction, and drive long-term loyalty. ... In a tech-first world, empathy isn’t a nice-to-have; it’s a leadership gamechanger. When leaders lead with heart and clarity, they don’t just inspire people, they unlock their full potential. Empathy fuels trust, drives innovation, and builds workplaces where people and ideas thrive.


Analyzing the Impact of AI on Critical Thinking in the Workplace

Instead of generating content from scratch, knowledge workers increasingly invest effort in verifying information, integrating AI-generated outputs into their work, and ensuring that the final outputs meet quality standards. What is motivating this behavior? Possible explanations include enhancing work quality, developing professional AI skills, laziness, and the desire to avoid negative outcomes like errors. For example, someone who is not very proficient in the English language could use GenAI to make their emails sound a lot more natural and avoid any potential misunderstandings. On the flipside, there are some drawbacks to using GenAI. These include overreliance on GenAI for routine or lower-stakes tasks, time pressures, limited awareness of potential AI pitfalls, and challenges in improving AI responses. ... The findings suggest that GenAI tools can reduce the perceived cognitive load for certain tasks. However, they find that GenAI poses risks to workers’ critical thinking skills by shifting their roles from active problem-solvers to AI output overseers who must verify and integrate responses into their workflows. Once again (and this cannot be emphasized enough) the study underscores the need for designing GenAI systems that actively support critical thinking. This will ensure that efficiency gains do not come at the expense of developing essential critical thinking skills.


Harnessing Data Lineage to Enhance Data Governance Frameworks

One of the most immediate benefits is improved data quality and troubleshooting. When a data quality issue arises, data lineage’s detailed trail can help you to quickly identify where the problem originated, so that you can fix errors and minimize downtime. Data lineage also enables better planning, since it allows you to run more effective data protection impact analysis. You can map data dependencies to assess how changes like system upgrades or new data integrations might affect your overall data integrity. This is especially valuable during migrations or major updates, as you can proactively mitigate any potential disruptions. Furthermore, regulatory compliance is also greatly enhanced through data lineage. With a complete audit trail documenting every data movement and transformation, organizations can more easily demonstrate compliance with regulations like GDPR, CCPA, and HIPAA. ... Developing a comprehensive data lineage framework can take substantial time, not to mention significant funds. In addition to the various data lineage tools, you might also need to have dedicated hosting servers, depending on the level of compliance needed, or to hire data lineage consultants. Mapping out complex data flows and maintaining up-to-date lineage in a data landscape that’s constantly shifting requires continuous attention and investment.
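The troubleshooting benefit above boils down to walking a dependency graph. A minimal sketch, with illustrative dataset names (real lineage would be harvested from pipelines or query logs):

```python
# Minimal lineage graph: each dataset maps to the upstream datasets
# it was derived from.
lineage = {
    "sales_dashboard": ["sales_agg"],
    "sales_agg": ["orders_clean"],
    "orders_clean": ["orders_raw"],
    "orders_raw": [],
}

def upstream(dataset):
    """Walk the lineage graph to find every source a dataset depends on,
    so a data quality issue can be traced back to where it originated."""
    seen, stack = [], [dataset]
    while stack:
        node = stack.pop()
        for parent in lineage.get(node, []):
            if parent not in seen:
                seen.append(parent)
                stack.append(parent)
    return seen

print(upstream("sales_dashboard"))  # ['sales_agg', 'orders_clean', 'orders_raw']
```

Running the same traversal in the opposite direction (children instead of parents) gives the impact analysis the article mentions: everything downstream that a system upgrade or schema change would touch.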

Daily Tech Digest - March 12, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you made them feel." -- Mary Kay Ash



Rethinking Firewall and Proxy Management for Enterprise Agility

Firewall and proxy management follows a simple rule: block all ports by default and allow only essential traffic. Recognizing that developers understand their applications best, why not empower them to manage firewall and proxy changes as part of a “shift security left” strategy? Every security specialist understands what happens in practice, however. When deadlines are tight, developers are tempted to take shortcuts: instead of figuring out the exact IP range an application needs, they open connectivity to the entire internet with the intention of fixing it later. Temporary fixes, if left unchecked, can evolve into serious vulnerabilities. ... Periodically auditing firewall and proxy rule sets is essential to maintaining security, but it is not a substitute for a robust approval process. Firewalls and proxies are exposed to external threats, and attackers might exploit misconfigurations before periodic audits catch them. Blocking insecure connections on a firewall when the application is already live requires re-architecting the solution, which is costly and time-consuming. Thus, preventing risky changes must be the priority.
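The default-deny rule the excerpt opens with can be sketched as a tiny allowlist check. The networks, ports, and rule format here are illustrative assumptions, not any firewall product's configuration syntax:

```python
from ipaddress import ip_address, ip_network

# Default-deny: only explicitly allowed (network, port) pairs pass.
ALLOW = [
    (ip_network("10.20.0.0/16"), 5432),   # app subnet -> database
    (ip_network("192.0.2.10/32"), 443),   # one partner API endpoint
]

def allowed(dst_ip: str, dst_port: int) -> bool:
    """Anything not matching an ALLOW entry is blocked by default."""
    return any(ip_address(dst_ip) in net and dst_port == port
               for net, port in ALLOW)

assert allowed("10.20.3.4", 5432)     # scoped rule passes
assert not allowed("10.20.3.4", 22)   # everything else is denied
# The "fix it later" shortcut amounts to adding (ip_network("0.0.0.0/0"), port),
# which makes every destination match and defeats default-deny entirely.
```

The one-line `0.0.0.0/0` comment is the whole risk the article describes: a single overly broad entry silently converts a default-deny posture into default-allow for that port.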


Multicloud: Tips for getting it right

It’s obvious that a multicloud strategy — regardless of what it actually looks like — will further increase complexity. This is simply because each cloud platform works with its own management tools, security protocols and performance metrics. Anyone who wants to integrate multicloud into their IT landscape needs a robust management system that can handle the specific requirements of the different environments while ensuring an overview and control across all platforms. This is necessary not only for reasons of handling and performance but also to be as free as possible when choosing the optimal provider for the respective application scenario. This requires cross-platform technologies and tools. The large hyperscalers do provide interfaces for data exchange with other platforms as standard. ... In general, anyone pursuing a multicloud strategy should take steps in advance to ensure that complexity does not lead to chaos but to more efficient IT processes. Security is one of the main issues. And it is twofold: on the one hand, the networked services must be protected in themselves and within their respective platforms. On the other hand, the entire construct with its various architectures and systems must be secure. It is well known that the interfaces are potential gateways for unwelcome “guests”.


FinOps and AI: A Winning Strategy for Cost-Efficient Growth

FinOps is a management approach focused on shared responsibility for cloud computing infrastructure and related costs. ... Companies are attempting to drink from the AI firehose, and unfortunately, they’re creating AI strategies in real-time as they rush to drive revenue and staff productivity. Ideally, you want a foundation in place before using AI in operations. This should include an emphasis on cost management, resource allocation, and keeping tabs on ROI. This is also the focus of FinOps, which can prevent errors and improve processes to further AI adoption. ... To begin, companies should create a budget and forecast the AI projects they want to take on. This planning is a pillar of FinOps and should accurately assess the total cost of initiatives, emphasizing resource allocation (including staffing) and eliminating billing overruns. Cost optimization can also help identify opportunities and reduce expenses. The new focus on AI services in the cloud could drive scalability and cost efficiency, since AI workloads are much more sensitive to overruns and inefficient usage. Even if organizations are not implementing AI into end-user workloads, there is still an opportunity to craft internal systems utilizing AI to help identify operational efficiencies and implement cost controls on existing infrastructure.


3 Signs Your Startup Needs a CTO — But Not As a Full-Time Hire

CTO as a service provides businesses with access to experienced technical leadership without the commitment of a full-time hire. This model allows startups to leverage specialized expertise on an as-needed basis. ... An on-demand expert can bridge this gap by offering leadership that goes beyond programming. This model provides access to strategic guidance on technology choices, project architecture and team dynamics. During a growth phase, mistakes in management won't be forgiven. ... Hiring a full-time CTO can strain tight budgets, diverting funds from critical areas like product development and market expansion. However, with the CTO as a service model, companies can access top-tier expertise tailored to their financial capabilities. This flexibility allows startups to engage a tech strategist on a project basis, paying only for the high-quality leadership they need when they need it (and if needed). ... Engaging outsourced expertise offers a viable solution, providing a fresh perspective on existing challenges at a cost that remains accessible, even amid resource constraints. This strategic move allows businesses to tap into a wealth of external knowledge, leveraging insights gained from diverse industry experiences. Such an external viewpoint can be invaluable, especially when navigating complex technical hurdles, ensuring that projects not only survive but thrive. 


How to Turn Developer Team Friction Into a Positive Force

Developer team friction, while often seen as a negative trait, can actually become a positive force under certain conditions, McGinnis says. "Friction can enhance problem-solving abilities by highlighting weaknesses in current processes or solutions," he explains. "It prompts the team to address these issues, thereby improving their overall problem-solving skills." Team friction often occurs when a developer passionately advocates a new approach or solution. ... Friction can easily spiral out of control when retrospectives and feedback focus on individuals instead of addressing issues and problems jointly as a team. "Staying solution-oriented and helping each other achieve collective success for the sake of the team, should always be the No. 1 priority," Miears says. "Make it a safe space." As a leader it's important to empower every team member to speak up, Beck advises. Each team member has a different and unique perspective. "For instance, you could have one brilliant engineer who rarely speaks up, but when they do it’s important that people listen," he says. "At other times, you may have an outspoken member on your team who will speak on every issue and argue for their point, regardless of the situation." 


Enterprise Architecture in the Digital Age: Navigating Challenges and Unleashing Transformative Potential

EA is about crafting a comprehensive, composable, and agile architecture-aligned blueprint that synchronizes an organization’s business processes, workforce, and technology with its strategic vision. Rooted in frameworks like TOGAF, it transcends IT, embedding itself into the very heart of a business. ... In this digital age, EA’s role is more critical than ever. It’s not just about maintaining systems; it’s about equipping organizations—whether agile startups or sprawling, successful enterprises—for the disruptions driven by rapid technological evolution and innovation. ... As we navigate inevitable future complexities, Enterprise Architecture stands as a critical differentiator between organizations that merely survive digital disruption and those that harness it for competitive advantage. The most successful implementations of EA share common characteristics: they integrate technical depth with business acumen, maintain adaptable governance frameworks, and continuously measure impact through concrete metrics. These aren’t abstract benefits—they represent tangible business outcomes that directly impact market position and financial performance. Looking forward, EA will increasingly focus on orchestrating complex ecosystems rather than simply mapping them. 


Generative AI Drives Emphasis on Unstructured Data Security

As organizations pivot their focus, the demand for vendors specializing in security solutions, such as data classification, encryption and access control, tailored to unstructured data is expected to increase. This increased demand reflects the necessity for robust and adaptable security measures that can effectively protect the vast and varied types of unstructured data organizations now manage. In tandem with this shift, the rising significance of unstructured data in driving business value and innovation compels organizations to develop expertise in unstructured data security. ... Organizations should prioritize investment in security controls specifically designed for unstructured data. This includes tools with advanced capabilities such as rapid data classification, entitlement management and unclassified data redaction. Solutions that offer prompt engineering and output filtering can also further enhance data security measures. ... Building a knowledgeable team is crucial for managing unstructured data security. Organizations should invest in staffing, training and development to cultivate expertise in this area. This involves hiring data security professionals with specialized skills and providing ongoing education to ensure they are equipped to handle the unique challenges associated with unstructured data. 


Quantum Pulses Could Help Preserve Qubit Stability, Researchers Report

The researchers used a model of two independent qubits, each interacting with its own environment through a process called pure dephasing. This form of decoherence arises from random fluctuations in the qubit’s surroundings, which gradually disrupt its quantum state. The study analyzed how different configurations of PDD pulses — applying them to one qubit versus both — affected the system’s evolution. By employing mathematical models that calculate the quantum speed limit based on changes in quantum coherence, the team measured the impact of periodic pulses on the system’s stability. When pulses were applied to both qubits, they observed a near-complete suppression of dephasing, while applying pulses to just one qubit provided partial protection. Importantly, the researchers investigated the effects of different pulse frequencies and durations to determine the optimal conditions for coherence preservation. ... While the study presents promising results, the effectiveness of PDD depends on the ability to deliver precise, high-frequency pulses. Practical quantum computing systems must contend with hardware limitations, such as pulse imperfections and operational noise, which could reduce the technique’s efficiency.
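In the standard filter-function picture of pure dephasing (a textbook sketch, not the paper's specific model), a qubit's populations are preserved while its off-diagonal coherence decays:

\[
\rho_{01}(t) = \rho_{01}(0)\, e^{-\Gamma(t)},
\qquad
\Gamma(t) = \int_{0}^{\infty} \frac{d\omega}{\pi}\, \frac{S(\omega)}{\omega^{2}}\, F(\omega t),
\]

where \(S(\omega)\) is the noise spectral density of the environment and \(F\) is the filter function set by the pulse sequence. Free evolution yields a filter peaked near \(\omega = 0\), so slow environmental fluctuations dominate the dephasing; a periodic train of \(\pi\) pulses pushes the filter's weight to higher frequencies where \(S(\omega)\) is typically weak. This is consistent with the study's observation that pulsing both qubits nearly suppresses dephasing, while pulsing only one protects only that qubit's contribution to the joint coherence.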


Disaster Recovery Plan for DevOps

While developing your disaster recovery plan for your DevOps stack, it’s worth considering the challenges DevOps teams face here. DevOps ecosystems have complex architectures, with interconnected pipelines and environments (e.g., GitHub and Jira integration). Thus, a single failure, whether due to a corrupted artifact or a ransomware attack, can cascade through the entire system. Moreover, the rapid pace of DevOps development creates constant change, which can complicate data consistency and integrity checks during the recovery process. Another issue is data retention policies: SaaS tools often impose limited retention periods, usually from 30 to 365 days. ... Your backup solution should allow you to:
- Automate your backups, scheduling them at the most appropriate interval so that no data is lost in the event of failure
- Provide long-term or even unlimited retention, which will help you restore data from any point in time
- Apply the 3-2-1 backup rule and ensure replication between all storages, so that if one backup location fails, you can run your restore from another
- Provide ransomware protection, including AES encryption with your own encryption key, immutable backups, and restore and DR capabilities
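The 3-2-1 rule mentioned above — at least 3 copies of your data, on at least 2 different media, with at least 1 copy offsite — can be expressed as a simple compliance check. The backup locations here are hypothetical:

```python
# Hypothetical backup copies of one dataset.
backups = [
    {"location": "onprem-nas", "media": "disk",  "offsite": False},
    {"location": "s3-eu",      "media": "cloud", "offsite": True},
    {"location": "tape-vault", "media": "tape",  "offsite": True},
]

def satisfies_321(copies):
    """3-2-1 rule: >=3 copies, on >=2 distinct media, >=1 offsite."""
    return (len(copies) >= 3
            and len({c["media"] for c in copies}) >= 2
            and any(c["offsite"] for c in copies))

print(satisfies_321(backups))      # True
print(satisfies_321(backups[:1]))  # False: one copy, one medium, nothing offsite
```

A check like this is worth automating alongside the backups themselves, so replication failures surface before a restore is ever needed.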


The state of ransomware: Fragmented but still potent despite takedowns

“Law enforcement takedowns have disrupted major groups like LockBit but newly formed groups quickly emerge akin to a good old-fashioned game of whack-a-mole,” said Jake Moore, global cybersecurity advisor at ESET. “Double and triple extortion, including data leaks and DDoS threats, are now extremely common, and ransomware-as-a-service models make attacks even easier to launch, even by inexperienced criminals.” Moore added: “Law enforcement agencies have struggled over the years to take control of this growing situation as it is costly and resource heavy to even attempt to take down a major criminal network.” ... Meanwhile, enterprises are taking proactive measures to defend against ransomware attacks. These include implementing zero trust architectures, enhancing endpoint detection and response (EDR) solutions, and conducting regular exercises to improve incident response readiness. Anna Chung, principal researcher at Palo Alto Networks’ Unit 42, told CSO that advanced tools such as next-gen firewalls, immutable backups, and cloud redundancies, while keeping systems regularly patched, can help defend against cyberattacks. Greater use of gen AI technologies by attackers is likely to bring further challenges, Chung warned. 

Daily Tech Digest - January 20, 2025

Robots get their ‘ChatGPT moment’

Nvidia implies that Cosmos will usher in a “ChatGPT moment” for robotics. The company means that, just as the basic technology of neural networks existed for many years, Google’s Transformer model enabled radically accelerated training that led to LLM chatbots like ChatGPT. In the more familiar world of LLMs, we’ve come to understand the relationship between the size of the data sets used for training these models and the speed of that training and their resulting performance and accuracy. ... Driving in the real world with a person as backup is time-consuming, expensive, and sometimes dangerous — especially when you consider that autonomous vehicles need to be trained to respond to dangerous situations. Using Cosmos to train autonomous vehicles would involve the rapid creation of huge numbers of simulated scenarios. For example, imagine the simulation of every kind of animal that could conceivably cross a road — bears, deer, dogs, cats, lizards, etc. — in tens of thousands of different weather and lighting conditions. By the end of all this training, the car’s digital twin in Omniverse would be able to recognize and navigate scenarios of animals on the road regardless of the animal and the weather or time of day. That learning would then be transferred to thousands of real cars, which would also know how to navigate those situations.


How to Use AI in Cyber Deception

Adaptation is one of the most significant ways AI improves honey-potting strategies. Machine learning subsets can evolve alongside bad actors, enabling them to anticipate novel techniques. Conventional signature-based detection methods are less effective because they can only flag known attack patterns. Algorithms, on the other hand, use a behavior-based approach. Synthetic data generation is another one of AI’s strengths. This technology can produce honeytokens — digital artifacts purpose-built for deceiving would-be attackers. For example, it could create bogus credentials and a fake database. Any attempt to use those during login can be categorized as malicious because it means they used illegitimate means to gain access and exfiltrate the imitation data. While algorithms can produce an entirely synthetic dataset, they can also add certain characters or symbols to existing, legitimate information to make its copy more convincing. Depending on the sham credentials’ uniqueness, there’s little to no chance of false positives. Minimizing false positives is essential since most of the tens of thousands of security alerts professionals receive daily are inaccurate. This figure may be even higher for medium- to large-sized enterprises using conventional behavior-based scanners or intrusion detection systems because they’re often inaccurate.
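The honeytoken idea above can be sketched in a few lines: generate credentials that no legitimate user ever receives, plant them in a decoy location, and treat any use of them as an intrusion. The AWS-style `AKIA` prefix is only to make the token look plausible; nothing here is a real credential format guarantee.

```python
import secrets
import string

def make_honeytoken(prefix: str = "AKIA") -> str:
    """Generate a fake, access-key-shaped honeytoken. It is never issued
    to a real user, so any attempt to use it is malicious by definition --
    which is why honeytoken alerts have essentially no false positives."""
    body = "".join(secrets.choice(string.ascii_uppercase + string.digits)
                   for _ in range(16))
    return prefix + body

# Seeded into a decoy config file, fake database, etc.
PLANTED = {make_honeytoken() for _ in range(3)}

def is_intrusion(credential_used: str) -> bool:
    """Check an observed credential against the planted honeytokens."""
    return credential_used in PLANTED

token = next(iter(PLANTED))
assert is_intrusion(token)                  # use of a planted token -> alert
assert not is_intrusion("not-a-honeytoken") # legitimate traffic is untouched
```

This is the static half of the technique; the AI-assisted variant the article describes goes further by mutating copies of real data so the decoys are statistically indistinguishable from production secrets.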


How organizations can secure their AI code

Organizations also expose themselves to risks when developers download machine learning (ML) models or datasets from platforms like Hugging Face. “In spite of security checks on both ends, it may still happen that the model contains a backdoor that becomes active once the model is integrated,” says Alex Ștefănescu, open-source developer at the Organized Crime and Corruption Reporting Project (OCCRP). “This could ultimately lead to data being leaked from the company that used the malicious models.” ... Not all AI-based tools are coming from teams full of software engineers. “We see a lot of adoption being driven by data analysts, marketing teams, researchers, etc. within organizations,” Meyer says. These teams aren’t traditionally developing their own software but are increasingly writing simple tools that adopt AI libraries and models, so they’re often not aware of the risks involved. “This combination of shadow engineering with lower-than-average application security awareness can be a breeding ground for risk,” he adds. ... When it comes to securing enough resources to protect AI systems, some stakeholders might hesitate, viewing it as an optional expense rather than a critical investment. “AI adoption is a divisive topic in many organizations, with some leaders and teams being ‘all-in’ on adoption and some being strongly resistant,” Meyer says. 


AI-driven insights transform security preparedness and recovery

IT security teams everywhere are struggling to meet the scale of actions required to remediate IT operational risk from continually evolving threats. Recovering digital operations after an incident requires a proactive system of IT observability, intelligence, and automation. Organizations should first unify visibility across their IT environments so they can quickly identify and respond to incidents. Additionally, teams need to eliminate data silos to prevent monitoring overload and resolve issues. ... Unfortunately, many companies still lack the foundational elements needed for successful and secure AI adoption. Common challenges include fragmented or low-quality data dispersed across multiple silos, lack of coordination, a shortage of specialized talent such as data and AI engineers, and a company culture resistant to change. Fostering a culture of security awareness starts with making security a visible and integral part of everyday operations. IT leaders should focus on equipping employees with actionable insights through tools that simplify complex security issues. Training programs, tailored to different roles, help ensure that teams understand the specific threats relevant to their responsibilities. Providing real-time feedback, such as simulated scenarios, builds practical awareness.


AI Is Quietly Steering Your Decisions - Before You Make Them

Agentic AI here is a critical enabler. These systems analyze user data across various modalities, including text, voice and behavioral patterns, to predict intentions and influence outcomes. They are more than a handy assistant helping you cross off a to-do list. OpenAI CEO Sam Altman called these agents "AI's killer function," comparing them to "super competent colleagues that know absolutely everything about my whole life - every email, every conversation I've ever had - but don't feel like an extension." And they are everywhere. Microsoft and Google spearheaded chatbot integration into everyday tools, with Microsoft embedding its Bing Chat and AI assistants into Office software and Google enhancing productivity tools such as Workspace with Gemini capabilities. The study cited the example of Meta, which has claimed to achieve human-level play in the game Diplomacy using its AI agent, CICERO. The research team behind CICERO, it says, cautions against "the potential danger for conversational AI agents" that "may learn to nudge its conversational partner to achieve a particular objective." Apple's App Intents framework, it explained, has protocols to "predict actions someone might take in the future" and "to suggest the app intent to someone in the future using predictions you [the developer] provide."


Why digital brands investing in AI to replace humans will fail

Despite its strengths, AI cannot (yet) accurately replicate core human qualities such as emotional intelligence, critical thinking, and nuanced judgment. What it can do is automate time-consuming, repetitive operations. Rather than attempting to replace human workers, forward-thinking organisations should harness the power of human-AI collaboration. By approaching AI this way, brands can respond to customers' digital problems faster, and employees can use the time gained to direct their efforts toward complex problem-solving, strategic planning and customer relations. Those that adopt a hybrid approach, finding the optimal balance between AI and human insight, will be most successful. The collaboration between AI-powered tools and human intelligence creates a powerful combination that can strengthen performance, drive innovation, and help deliver a better overall customer experience. ... On the other hand, businesses that look to replace workers, and eventually rely solely on AI-generated operations, risk losing the genuine human touch. This loss of authenticity has the potential to alienate customers, leaving them feeling that their experiences with digital brands are insincere and mechanical.


From devops to CTO: 5 things to start doing now

If you want to be recognized for promotions and greater responsibilities, the first place to start is in your areas of expertise and with your team, peers, and technology leaders. However, shift your focus from getting something done to a practice leadership mindset. Develop a practice or platform your team and colleagues want to use and demonstrate its benefits to the organization. ... One of the bigger challenges for engineers when taking on larger technical responsibilities is shifting their mindset from getting work done today to deciding what work to prioritize and influencing longer-term implementation decisions. Instead of developing immediate solutions, the path to CTO requires planning architecture, establishing governance, and influencing teams to adopt self-organizing standards. ... “If devops professionals want to be considered for the role of CTO, they need to take the time to master a wide range of skills,” says Alok Uniyal, SVP and head of IT process consulting practice at Infosys. “You cannot become a CTO without understanding areas such as enterprise architecture, core software engineering and operations, fostering tech innovation, the company’s business, and technology’s role in driving business value. Showing leadership that you understand all technology workstreams at a company as well as key tech trends and innovations in the industry is critical for CTO consideration.”


The Human Touch in Tech: Why Local IT Support Remains Essential

While AI can handle common issues, complex or unforeseen problems often require creative solutions and in-depth technical expertise. Call center agents, with limited access to resources — and often operating under strict protocols — may be unable to depart from standardized procedures, even when doing so might be beneficial. The collaborative, adaptable problem-solving approach of a skilled, experienced IT technician is often the key to resolving these intricate challenges. Many IT issues require physical intervention and hands-on troubleshooting. Remote support, though helpful, can't always address hardware problems, network configurations, or security breaches that require on-site assessment and repair. Local IT support companies offering on-site visits have a clear advantage in addressing these types of issues efficiently and effectively. ... Local providers often possess a wide range of skills and experience, allowing them to handle a broader spectrum of issues. Their ability to think creatively and collaboratively enables them to address complex problems that may stump call center agents or AI systems. Furthermore, their local presence allows for swift on-site responses to critical situations.


Six ways to reduce cloud database costs without sacrificing performance

Automate data archiving or deletion for unused or outdated records. Use lifecycle policies to move logs older than a specified number of days to cheaper storage, or delete them. TTL (Time to Live) is a simpler way to implement such a data lifecycle: a TTL setting defines the lifespan of a piece of data (e.g., a record or document) in the database, and once the specified TTL expires, the data is automatically deleted or marked for deletion by the database. ... Consolidating multiple applications onto a single database results in fewer instances, reducing compute and storage costs and enabling efficient resource utilisation when workloads have similar usage patterns. The implementation can follow schema-based isolation, where a separate schema is created for each tenant, or row-level isolation, where a tenant ID column is used to segment data within tables. One example is hosting a SaaS platform for multiple customers on a single database instance with logical partitions. ... Creating copies of specific data items can enhance read performance by reducing costly operations. In an e-commerce store, for example, you'd typically have separate tables for customers, products, and orders. Retrieving one customer's order history would involve a query that joins the orders table with the customer and product tables.
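The TTL mechanism can be sketched in a few lines. Real databases (for example, MongoDB TTL indexes or DynamoDB TTL attributes) enforce this server-side; the in-memory model below is only an illustration of the logic, not any vendor's API:

```python
import time

class TTLStore:
    """Minimal in-memory illustration of the TTL pattern: each record carries
    an expiry timestamp, and expired records are purged (lazily, on read)
    rather than accumulating in expensive storage."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._data = {}  # key -> (value, expires_at)

    def put(self, key, value, now=None):
        now = time.time() if now is None else now
        self._data[key] = (value, now + self.ttl)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        item = self._data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if now >= expires_at:
            del self._data[key]  # expired: auto-delete, as a database TTL would
            return None
        return value
```

The `now` parameter is only there to make the behavior easy to demonstrate deterministically; a real store would always use the current clock.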


AI, IoT, and cybersecurity are at the heart of our innovation: Sharat Sinha, Airtel Business

At Airtel Business, we understand that cybersecurity is a growing concern for Indian enterprises. With cyberattacks in India projected to reach one trillion per year by 2033, businesses need robust solutions to safeguard their digital assets. That’s where Airtel Secure Internet and Airtel Secure Digital Internet come in. Airtel Secure Internet, in collaboration with Fortinet, provides comprehensive end-to-end protection by integrating Fortinet’s advanced firewall with Airtel’s high-speed Internet Leased Line (ILL). This solution offers 24/7 monitoring, real-time threat detection, and automated mitigation, all powered by Airtel’s Security Operations Centre (SOC) and Fortinet’s SOAR platform. It ensures businesses are protected from a range of cyberthreats while optimising operational efficiency, without the need for large capital investments in security infrastructure. In addition, Airtel Secure Digital Internet, in partnership with Zscaler, uses Zero Trust Architecture (ZTA) to continuously validate user, device, and network interactions. Combining Zscaler’s cloud security with Security Service Edge (SSE) technology, this solution ensures secure cloud access, SSL inspection, and centralised policy enforcement, helping businesses reduce attack surfaces and simplify security management. 



Quote for the day:

"The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things." -- Ronald Reagan

Daily Tech Digest - September 01, 2024

Since cyber risk can’t be eliminated, the question that must be answered is: Can cyber risk at least be managed in a cost-effective manner? The answer is an emphatic yes! ... Identify the sources of cyber risk. These sources can be broken down into various categories. More specifically, there are internal and external threats, as well as potential vulnerabilities that are the basis for cyber risk. Identifying these threats and vulnerabilities is not only a logical place to start the process of managing an organization’s cyber risk, it also will help to frame an approach for addressing an organization’s cyber risk. Estimate the likelihood (i.e., probability) that your organization will experience a cyber breach. Of course, any single point estimate of the probability of a cyber breach is just that—an estimate of one possibility from a probability distribution. Thus, rather than estimating a single probability, a range of probabilities could be considered. Estimate the maximum cost to an organization if a cyber breach occurs. Here again, a point estimate of the maximum cost resulting from a cyber-attack is just that—an estimate of one possible cost. Thus, rather than estimating a single cost, a range of costs could be considered.
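The probability and cost ranges described above combine naturally into an expected-loss band. The figures below are purely illustrative, not drawn from the article:

```python
def expected_loss_range(prob_range, cost_range):
    """Combine a breach-probability range with a maximum-cost range
    into a (low, high) band of expected loss."""
    p_lo, p_hi = prob_range
    c_lo, c_hi = cost_range
    return p_lo * c_lo, p_hi * c_hi

# Illustrative estimates: 10-30% chance of a breach, $0.5M-$2M maximum cost.
low, high = expected_loss_range((0.10, 0.30), (500_000, 2_000_000))
print(f"Expected loss: ${low:,.0f} to ${high:,.0f}")
```

Working with a band rather than a point estimate reflects the article's caution that any single probability or cost figure is just one draw from a distribution.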


How AI is Revolutionizing Prosthetics to Refine Movement

AI prosthetics technology is advancing on several fronts. Researchers at the UK's University of Southampton and Switzerland's EPFL University have, for instance, developed a sensor that allows prosthetic limbs to sense wetness and temperature changes. "This capability helps users adjust their grip on slippery objects, such as wet glasses, enhancing manual dexterity and making the prosthetic feel more like a natural part of their body," Torrang says. Multi-texture surface recognition is another area of important research. Advanced AI algorithms, such as neural networks, can be used to process data from liquid metal sensors embedded in prosthetic hands. "These sensors can distinguish between different textures, enabling users to feel various surfaces," Torrang says. "For example, researchers have developed a system that can accurately detect and differentiate between ten different textures, helping users perform tasks that require precise touch." Natural sensory feedback research is also attracting attention. AI can be used to provide natural sensory feedback through biomimetic stimulation, which mimics the natural signals of the nervous system.


From Chaos to Clarity: CTO’s Guide to Successful Software {Code} Refactoring

As software grows, code can become overly complicated and difficult to understand, making modifications and extensions challenging. Refactoring simplifies and clarifies the code, enhancing its readability and maintainability. ... Signs of poor performance: If software performance degrades or fails to meet efficiency benchmarks, refactoring can optimize it and improve its speed and responsiveness. ... Migration to newer technologies and libraries: When migrating legacy systems to newer technologies and libraries, code refactoring ensures smooth integration and compatibility, preventing potential issues down the line. ... Frequent bugs: Frequent bugs and system crashes often indicate a messy codebase that requires cleanup. If your team spends more time tracking down bugs than developing new features, code refactoring can improve stability and reliability. ... Onboarding new developers: Onboarding new developers is another instance where refactoring is beneficial. Standardizing the codebase ensures new team members can understand and work with it more effectively.


How to Win the War Against Bad Master Data

The most immediate chance to make a difference lies within your existing dataset. Take the initiative to compare your supplier and customer master data with reliable external sources, such as government databases, regulatory lists, and other trusted entities, to pinpoint discrepancies and omissions. Consider this approach as a form of “data governance as a service,” as a shortcut to data quality where you can rely on comparison with the authoritative data sources to make sure fields are the right length, in the right format, and, even more important, accurate. This task may require significant effort (unless automated master data validation and enrichment is employed), but it can provide an immediate ROI. Each corrected error and updated entry contributes to greater compliance, lower risk and enhanced operational efficiency within the organization. However, many companies lack a consistent process for cleaning data, and even among those with a process in place, the scope and frequency of data cleansing is often insufficient. The best data quality comes from continuous automated cleansing and enrichment.
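The "right length, right format" checks mentioned above can be sketched as simple rule-based validation. The field names and patterns here are hypothetical examples, not any specific regulatory schema:

```python
import re

# Hypothetical format rules for supplier master data fields.
FIELD_RULES = {
    "vat_number": re.compile(r"^[A-Z]{2}\d{9}$"),  # e.g., country prefix + 9 digits
    "postal_code": re.compile(r"^\d{5}$"),
    "duns_number": re.compile(r"^\d{9}$"),
}

def validate_record(record):
    """Return a list of (field, value) pairs that fail their format rule."""
    errors = []
    for field, pattern in FIELD_RULES.items():
        value = record.get(field, "")
        if not pattern.fullmatch(value):
            errors.append((field, value))
    return errors
```

Format checks like these catch length and shape errors cheaply; verifying that a well-formed value is actually *accurate* still requires comparison against the authoritative external sources the article describes.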


3 Ways to Boost Cybersecurity Defenses With Limited Resources

Assume-breach accepts that breaches are inevitable, shifting the focus from preventing all breaches to minimizing the impact of a breach through security measures, protocols and tools that are designed with the assumption that an attacker may have already compromised parts of the network. Paired with the assume-breach mindset, these security measures, protocols and tools focus on protecting data, detecting unusual behavior and responding quickly to potential threats. Just as cars are equipped with seatbelts and airbags to reduce the fallout of a crash, assume-breach encourages organizations to put proactive measures in place to reduce the impact and damage when the worst occurs. ... In the event a cyber attack does occur, having a well-tested and resilient plan in place is key to minimize impacts. As the entire organization participates in these practices and trainings, leaders can focus on implementing assume-breach security measures, protocols and tools. These measures should include enhancing real-time visibility, identifying vulnerabilities, blocking known ransomware points and strategic asset segmentation. 


The Future of LLMs Is in Your Pocket

The first reason is that, due to the cost of GPUs, generative AI has broken the near-zero marginal cost model that SaaS has enjoyed. Today, anything bundling generative AI commands a high seat price simply to make the product economically viable. This detachment from underlying value is consequential for many products that can’t price optimally to maximize revenue. In practice, some products are constrained by a pricing floor (e.g., it is impossible to discount 50% to 10x the volume), and some features can’t be launched because the upsell doesn’t pay for the inference cost ... The second reason is that the user experience with remote models could be better: generative AI enables useful new features, but they often come at the expense of a worse experience. Applications that didn’t depend on an internet connection (e.g., photo editors) now require it. Remote inference introduces additional friction, such as latency. Local models remove the dependency on an internet connection. The third reason has to do with how models handle user data. This plays out in two dimensions. First, serious concerns have been raised about sharing growing amounts of private information with AI systems.


GenOps: learning from the world of microservices and traditional DevOps

How do the operational requirements of a generative AI application differ from other applications? With traditional applications, the unit of operationalisation is the microservice: a discrete, functional unit of code, packaged up into a container and deployed into a container-native runtime such as Kubernetes. For generative AI applications, the comparative unit is the generative AI agent: also a discrete, functional unit of code defined to handle a specific task, but with some additional constituent components that make it more than ‘just’ a microservice ... The Reasoning Loop is essentially the full scope of a microservice, and the model and Tool definitions are the additional powers that make it into something more. Importantly, although the Reasoning Loop logic is just code and therefore deterministic in nature, it is driven by the responses from non-deterministic AI models, and this non-deterministic nature is what creates the need for Tools, as the agent ‘chooses for itself’ which external service should be used to fulfill a task. A fully deterministic microservice has no need for this ‘cookbook’ of Tools to select from: its calls to external services are pre-determined and hard-coded into the Reasoning Loop.
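The Reasoning Loop plus Tools pattern can be sketched as follows. This is a hypothetical illustration: the model here is a trivial deterministic stub, whereas in a real agent it would be a non-deterministic LLM call, which is precisely what makes the tool "cookbook" necessary:

```python
# The agent's cookbook: external capabilities the model may choose from.
TOOLS = {
    "weather_tool": lambda query: f"weather-answer({query})",
    "time_tool": lambda query: f"time-answer({query})",
}

def fake_model(task):
    """Stand-in for the model call: returns the name of the tool to invoke.
    A real LLM would make this choice non-deterministically."""
    return "weather_tool" if "weather" in task else "time_tool"

def reasoning_loop(task, max_steps=3):
    """The deterministic loop: ask the model which tool to use, dispatch to
    it, and return the result. Which tool runs is the model's decision, not
    hard-coded into the loop."""
    for _ in range(max_steps):
        tool_name = fake_model(task)
        tool = TOOLS.get(tool_name)
        if tool is not None:
            return tool(task)
    raise RuntimeError("model never selected a known tool")
```

Contrast this with a plain microservice, where the call to the weather or time service would be written directly into the code path rather than selected at runtime.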


Saudi Arabia strengthening cyber resilience through skills development

Currently, four of the top 10 fastest-growing job roles in Saudi Arabia fall within the fields of cybersecurity, data analysis, and software development. As the demand for such expertise far outstrips supply, the government, industry, and academia must collaborate to develop and expand pathways to nurture talent in this field. Enhanced curricula and specialized programs will help upskill students in data protection, while partnerships with global tech companies can facilitate knowledge transfer and provide access to cutting-edge technologies and methodologies in public and private sector organizations. The Saudi government’s investments in initiatives to enhance digital skills, including a $1.2 billion plan to train 100,000 youths by 2030 in critical fields like digital security, are a crucial step in this direction. Saudi Arabia today outpaces the global average in cybersecurity trends, with 3.1 percent compared to the global average of 2.5 percent. An overwhelming 79 percent of Saudi employees anticipate substantial shifts in their work dynamics due to AI advancements. This is reflected in the rise of new learners in the Kingdom who are building skill proficiencies and acquiring new digital skills to boost their economic mobility.


How Financial Firms Can Build Better Data Strategies

For financial organizations, data strategies are often driven by CISOs and tend to focus on data protection and security. This enables regulatory and operational compliance by ensuring the right people can access the right data at the right time while still aligning to the corporate security and risk stance. But this now comes within the context of the emergence of artificial intelligence and the growing sense that firms need to leverage it to gain a competitive edge. For example, loan data can be made more valuable with AI-driven analytics. Or banks can use AI tools to identify patterns related to fraud or compliance challenges to help them avoid potential regulatory pitfalls. ... New tools often seem like the answer to every data challenge, but the right tools build on what’s already in place. To ensure that new solutions deliver a strategic advantage, leaders must ask simple questions: What’s the business need? Why am I pursuing this approach or tool? This requires an honest assessment of current data conditions, business needs and coworker skill sets. While many businesses are fully compliant and meet regulations, their data may not be highly active or in great shape. Identifying issues lets leaders define business needs.


Achieving digital transformation in fintech one step at a time

The fintech industry continues to experience unprecedented growth year after year. According to a Statista survey, there are more than 3 billion customers in this niche worldwide, and this number is expected to grow to 4.8 billion by 2028. Advanced financial technologies are gradually replacing traditional services. For instance, a recent study by the American Bankers Association showed that 71% of users prefer managing their financial affairs online (48% of them with the help of mobile apps), while only 9% of clients would rather go to a physical bank branch. In addition, more and more people around the world are favoring cryptocurrencies over traditional currencies as payment and investment tools. Forbes statistics bear this out, showing that the capitalization of the crypto market has exceeded 2.5 trillion dollars. As is well known, demand creates supply. Such keen interest in modern fintech instruments and services from users all over the world generates constant growth of supply in this sphere. Every year, plenty of new companies emerge, while existing ones introduce numerous innovations to keep their regular customers and attract new ones.



Quote for the day:

"True greatness, true leadership, is achieved not by reducing men to one's service but in giving oneself in selfless service to them." -- J. Oswald Sanders

Daily Tech Digest - August 12, 2024

In three or four years, ‘we won’t even talk about AI’

In general, there’s a very positive view of AI in tech. In a lot of other industries, there’s some uncertainty, some trepidation, some curiosity. But part of our pulse survey said about three out of four tech workers are using AI on a daily basis. So, the adoption in this portfolio of companies is higher than most, and I’d also said most employers and workers have a very good idea that AI is going to improve their business and their work. ... “I view AI skills as adjacent, additive skills for most people — aside from really hardcore data scientists and AI engineers. This is how most people will work in the new world. Generally, it depends. Some organizations have built whole, distinct AI organizations. Others have built embedded AI domains in all of their job functions. It really depends. There’s a lot of discussion around whether companies should have a chief AI officer. I’m not sure that’s necessary. I think a lot of those functions are already in place. You do need someone in your organization who has a holistic view of the positive sides of this and the risks associated with this.”


The AI Balancing Act: Innovating While Safeguarding Consumer Privacy

There are two sides to every coin. While AI can further compliance efforts, it can also create new privacy and security challenges. This is particularly true today, amid an ongoing global effort to strengthen data privacy laws. 71% of countries have data privacy legislation, and in recent years, this has evolved to encapsulate AI. In the EU, for instance, approval has been secured from the European Parliament around a specific AI regulatory framework. This framework imposes specific obligations on providers of high-risk AI systems and could ban certain AI-powered applications. The fact is, AI-powered technology is immensely powerful. But, it comes with complex challenges to data privacy compliance. A primary concern here relates to purpose limitation, specifically the disclosure provided to consumers regarding the purpose(s) for data processing and the consent obtained. As AI systems evolve, they may find new ways to utilise data, potentially extending beyond the scope of original disclosure and consent agreement. As such, maintaining transparency in AI operations to ensure accurate and appropriate data use disclosures is critical.


Is biometric authentication still effective?

With the rapid advancement and accessibility of technologies, the efficacy and security of biometric authentication methods are under threat. Fraudsters are using spoofing techniques to replicate or falsify biometric data, such as creating synthetic fingerprints or 3D facial models, to fool sensors, mimic legitimate biometric traits and gain unauthorized access to secured services. ... Unlike traditional biometric authentication, which relies on static physical attributes, behavioral biometrics verify user identity based on unique interaction patterns, such as typing rhythm, mouse movements and touchscreen interactions. This shift is essential because behavioral biometrics offer a more dynamic and adaptive layer of security, making it significantly harder for fraudsters to replicate or mask. ... With data scattered across different systems, it’s challenging to correlate information, connect the dots and identify overarching patterns of bad behavior. A decentralized approach causes businesses to overlook crucial fraud indicators and struggle to respond effectively to emerging threats due to the lack of visibility and coordination among disparate fraud prevention tools.
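The typing-rhythm idea can be illustrated with a toy comparison of inter-keystroke timings. Real behavioral biometric systems use far richer features and statistical models; the feature, threshold, and numbers below are illustrative only:

```python
def typing_rhythm_distance(profile, sample):
    """Mean absolute difference between enrolled and observed inter-keystroke
    intervals (in milliseconds). A large distance suggests a different typist."""
    if len(profile) != len(sample):
        raise ValueError("interval vectors must align")
    return sum(abs(p - s) for p, s in zip(profile, sample)) / len(profile)

# Illustrative gaps (ms) between successive keys while typing a passphrase.
enrolled = [120, 95, 210, 140]
attempt = [125, 90, 205, 150]
distance = typing_rhythm_distance(enrolled, attempt)
```

Because these patterns emerge from muscle memory rather than a static secret, they are much harder for a fraudster to observe and replay than a fingerprint image or a 3D facial model.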


Practical strategies for mitigating API security risks

Identity and access management is crucial for a complete API security strategy. IAM facilitates efficient user management from creation to deactivation and ensures that only authorized individuals access APIs. IAM enables granular access control, granting permissions based on specific attributes and resources rather than just predefined roles. Integration with security information and event management (SIEM) systems enhances security by providing centralized visibility and enabling better threat detection and response. AI and machine learning are revolutionizing API security by providing sophisticated tools that enhance design, testing, threat detection, and overall governance. These technologies improve the robustness and resilience of APIs, enabling organizations to stay ahead of emerging threats and regulatory changes. As AI evolves, its role in API security will become increasingly vital, offering innovative solutions to the complex challenges of safeguarding digital assets. AI in API security goes beyond the limitations of human or rule-based interventions, enabling advanced pattern recognition and automating security audits and governance for greater defense against evolving threats.
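Granular, attribute-based access control of the kind described above can be sketched as a small policy check. The attributes, resource paths, and policies here are hypothetical examples, not any particular IAM product's schema:

```python
# Hypothetical ABAC policies: (required user attributes, resource prefix,
# allowed HTTP methods). Permissions hinge on attributes and resources,
# not just predefined roles.
POLICIES = [
    ({"team": "billing"}, "/api/invoices", {"GET", "POST"}),
    ({"team": "support"}, "/api/tickets", {"GET"}),
]

def is_allowed(user_attrs, resource, method):
    """Grant access only if some policy's attributes all match the user and
    the policy covers both the resource and the method (default deny)."""
    for required, prefix, methods in POLICIES:
        attrs_match = all(user_attrs.get(k) == v for k, v in required.items())
        if attrs_match and resource.startswith(prefix) and method in methods:
            return True
    return False
```

In practice each allow/deny decision would also be emitted as an event to the SIEM, giving the centralized visibility the excerpt mentions.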


The evolution of the CTO – from tech keeper to strategic leader

CTOs have experienced a huge shift in how they are positioned in the workplace. They are no longer part of a small-to-medium-sized team that operates separately from the rest of the business; they are the key to tangible business growth and perhaps one of the most crucial parts of a leadership team. The main duty of CTOs is to maintain – and where possible, to modernise – tech, and to decide when something has kicked the bucket and no longer has a purpose. These things require people power, specialist skills and money. Needless to say, investment in the role is vital. Tech leaders often feel burnt out, or worried that they don’t have the resources and support needed to do their job well. ... The saying goes, “You can never set foot in the same river twice,” and the same is true for leaders in tech – everything evolves from the moment you start working on a project. There is much to appreciate about technology that remains stable yet adaptable when changes are necessary during development. Today, innovative CTOs are on the lookout for software solutions that come with the flexibility to make important U-turns if ever needed.


How AIOps Is Transforming IT Operations Management

IT operations management has become increasingly challenging as networks have become larger and more complex, with the introduction of remote workers and the distribution of applications and workloads across networks. Traditional operations management tools and practices struggle to keep up with the ever-growing volumes of data from multiple sources within complex and varied network environments. AIOps was designed to bring the speed, accuracy and predictive capabilities of AI technology to IT operations. AIOps provides contextually enriched, deep end-to-end, real-time insights that can be proactively acted upon, according to Forrester. AIOps solutions use real-time telemetry, developing patterns and historical operational data to perform real-time assessments of what is happening, whether it has happened before or not, what paths it might take, and what negative effects it might have on business operations. ... A "digitally mature" organization has a much better ROI on the AI investment. But because this is a "rolling target" and not static, an organization's IT infrastructure "must be able to adapt and change," Ramamoorthy said.


The cyber assault on healthcare: What the Change Healthcare breach reveals

Many security leaders report that they don’t have adequate resources to implement the needed security measures because they’re often competing with pricey life-saving medical equipment for the limited funds available to spend, Kim says. Furthermore, he says their complex technology environments can make applying and creating security in depth not only more challenging but more costly, too. That, in turn, makes it less likely for CISOs to get the resources they need. Security teams in healthcare also have more challenges in updating and patching systems, Riggi explains, as the sector’s need for 24/7 availability means organizations can’t easily go offline — if they can go offline at all — to perform needed work. Healthcare security leaders also have a rapidly expanding tech environment to secure, as both more partners and more patients with remote medical devices become part of the sector’s already highly interconnected environment, says Errol S. Weiss, chief security officer at Health-ISAC. Such expansion heightens the challenges, complexities and costs of implementing security controls as well as heightening the risks that a successful attack against one point in that web would impact many others.


Solar Power Installations Worldwide Open to Cloud API Bugs

"The issue we discovered lies in the cloud APIs that connect the hardware with the user," both on Solarman's platform and on Deye Cloud, says Bogdan Botezatu, director of threat research and reporting at Bitdefender. "These APIs have vulnerable endpoints that allow an unauthorized third party to change settings or otherwise control the inverters and data loggers via the vulnerable Solarman and Deye platforms," he says. Bitdefender, for instance, found that the Solarman platform's /oauth2-s/oauth/token API endpoint would let an attacker generate authorization tokens for any regular or business accounts on the platform. "This means that a malicious user could iterate through all accounts, take over any of them and modify inverter parameters or change how the inverter interacts with the grid," Bitdefender said in its report. The security vendor also found Solarman's API endpoints to be exposing an excessive amount of information — including personally identifiable information — about organizations and individuals on the platform. 


6 hard truths of generative AI in the enterprise

“Not a week goes by without another new tool that is mind-blowing in its abilities and potential future impact,” agrees David Higginson, chief innovation officer and executive vice president of Phoenix Children’s Hospital. But right now genAI “can really only be executed by a small number of technology giants rather than being tinkered with at a local skunkworks level within a healthcare organization,” he says. “Therefore, it feels as if we are in a bit of a paused state, waiting for established vendors to deliver mature solutions that can provide the tangible value we all anticipated.” ... The fundamental barriers to adopting genAI are the scarcity and cost of the hardware, power, and data needed to train models, Higginson says. “With such scarcity comes the need to prioritize which solutions have the broadest appeal to the population and can generate the most long-term revenue,” he says. ... While research and development continue to move the needle on what genAI can do, “we know that data is a critical aspect to enabling AI solutions, and we also recognize that many organizations are uncovering the work it will take to build the right data foundations to support scaled AI deployments,” says Deloitte’s Rowan.


Investing in Capacity to Adapt to Surprises in Software-Reliant Businesses

A well-known and contrarian adage in the Resilience Engineering community is that Murphy's Law - "anything that can go wrong, will" - is wrong: what can go wrong almost never does, but we don't tend to notice that. People engaged in modern work (not just software engineers) are continually adapting what they’re doing to the context they find themselves in. They are able to avoid problems in almost everything they do, almost all of the time. When things do go "sideways" and an issue crops up that they need to handle or rectify, they are able to adapt to the situation because of the expertise they have. Research on decision-making described in the article Seeing the Invisible: Perceptual-Cognitive Aspects of Expertise by Klein, G. A., & Hoffman, R. R. (2020) reveals that while demonstrations of expertise play out in time-pressured, high-consequence events (like incident response), expertise comes from experience facing the varied situations involved in "ordinary" everyday work. It is "hidden" because the speed and ease with which experts do ordinary work belies how sophisticated that work is.

Quote for the day:

"True leadership must be for the benefit of the followers, not the enrichment of the leaders." -- Robert Townsend