Daily Tech Digest - December 13, 2024

The fintech revolution: How digital disruption is reshaping the future of banking

Several pivotal trends have converged to accelerate fintech adoption. The JAM trinity—Jan Dhan, Aadhaar, and Mobile—became the cornerstone of India’s fintech revolution, enabling seamless, paperless onboarding and verification for financial services. Aadhaar-enabled biometric authentication, for instance, has transformed how identity verification is conducted, making the process entirely mobile-based. Perhaps the most profound disruptor is the Unified Payments Interface (UPI). Introduced by the Indian government as part of its push for a cashless economy, UPI has redefined peer-to-peer (P2P) and person-to-merchant (P2M) transactions. As of September 2024, UPI transactions have reached a staggering 15 billion per month, with transaction values surpassing INR 20.6 trillion, marking a 16x increase in volume and a 13x increase in value over five years. UPI’s convenience and speed have made it the default payment mode for millions, further marginalising the role of traditional banking infrastructure. At the same time, blockchain technology is emerging as a force that could dramatically reduce bank operational costs. Decentralised, secure, and transparent, blockchain allows financial institutions to overhaul their legacy systems.


Bridging the AI Skills Gap: Top Strategies for IT Teams in 2025

Daly explained that practical applications are key to learning, and creating cross-functional teams that include AI experts can facilitate knowledge sharing and the practical application of new skills. "To prepare for 2025 and beyond, it's crucial to integrate AI and ML into the core business strategy beyond R&D investment or technical roles, but also into broader organizational talent development," she said. "This ensures all employees understand the opportunity [and] potential impact, and are trained on responsible use." ... Kayne McGladrey, IEEE senior member and field CISO at Hyperproof, said AI ethics skills are important because they ensure that AI systems are developed and used responsibly, aligning with ethical standards and societal values. "These skills help in identifying and mitigating biases, ensuring transparency, and maintaining accountability in AI operations," he explained. ... Scott Wheeler, cloud practice lead at Asperitas, said building a culture of innovation and continual learning is the first step in closing a skills gap, particularly for newer technologies like AI. "Provide access to learning resources, such as on-demand platforms like Coursera, Udemy, Wizlabs," he suggested. "Embed learning into IT projects by allocating time in the project schedule and monitor and adjust the various programs based on what works or doesn't work for your organization."


What Makes the Ideal Platform Engineer?

Platform engineers decide on a platform — consisting of many different tools, workflows and capabilities — that DevOps, developers and others in the business can use to develop and monitor the development of software. They base these decisions on what will work best for these users. ... The old adage that every business is unique applies here; platform engineering doesn’t look the same in every organization, nor do the platforms or portals that are used. But there are some key responsibilities that platform engineers will often have and skills that they require. Noam Brendel is a DevOps team lead at Checkmarx, an application security firm that has embraced platform engineering. He believes a platform engineer’s focus should be on improving developer excellence. “The perfect platform engineer helps developers by building systems that eliminate bottlenecks and increase collaboration,” he said. ... “Platform engineers need to have a strong understanding of how everything is connected and how the platform is built behind the scenes,” explained Zohar Einy, CEO of Port, a provider of open internal developer portals. He emphasized the importance of knowing how the company’s technical stack is structured and which development tools are used.


Biometrics and AI Knock Out Passwords in the Security Battle

Biometrics and AI-powered authentication have moved beyond concept to successful application. For instance, HSBC's Voice ID technology analyzes over 100 characteristics of an individual's voice, maintains a sample of the customer's voice, and compares it to the caller's voice. ... The success of implementing biometrics and AI into existing systems depends on organizations following best practices. Organizational leaders can assess organizational needs by conducting a security audit to identify vulnerabilities that biometrics and AI can address. This information is then used to create a roadmap for implementation considering budget, resources, and timelines. Involving appropriate staff in such discussions is essential so all stakeholders understand the factors considered in decision-making. Selecting the right technology calls for careful vendor evaluation and identification of solutions that align with the organization's requirements and compliance obligations. Once these decisions are solidified, it is prudent to use pilot programs to start the integration. Small-scale deployments test effectiveness and address any unforeseen issues before large-scale implementation.


CISA, Five Eyes issue hardening guidance for communications infrastructure

The joint guidance is in direct response to the breach of telecommunications infrastructure carried out by the Chinese government-linked hacking collective known as Salt Typhoon. ... “Although tailored to network defenders and engineers of communications infrastructure, this guide may also apply to organizations with on-premises enterprise equipment,” the guidance states. “The authoring agencies encourage telecommunications and other critical infrastructure organizations to apply the best practices in this guide.” “As of this release date,” the guidance says, “identified exploitations or compromises associated with these threat actors’ activity align with existing weaknesses associated with victim infrastructure; no novel activity has been observed. Patching vulnerable devices and services, as well as generally securing environments, will reduce opportunities for intrusion and mitigate the actors’ activity.” Visibility, a cornerstone of network defense that enables monitoring, detecting, and understanding activities within infrastructure, is pivotal in identifying potential threats, vulnerabilities, and anomalous behaviors before they escalate into significant security incidents.


Tackling software vulnerabilities with smarter developer strategies

No two developers solve a problem or build a software product the same way. Some arrive at their career through formal college education, while others are self-taught with minimal mentorship. Styles and experiences vary wildly. Equally, we should expect they will approach secure coding practices and guidelines with similar diversity of thought. Organizations must account for this wide diversity in their secure development practices – training, guidelines, standards. These may be foreign concepts even to a highly proficient developer, and we need to give our developers the time and space to learn, ask questions, and build a secure coding proficiency. ... Best-in-class organizations have established ‘security champions’ programs in which highly skilled developers are empowered to be a team-level resource for secure coding knowledge and best practice, so that institutional knowledge spreads. This is particularly important in remote environments, where security teams may be unfamiliar or untrusted faces and internal development team leaders are that much more important in setting the tone and direction for adopting a security mindset and applying security principles.


Developing an AI platform for enhanced manufacturing efficiency

To power our AI Platform, we opted for a hybrid architecture that combines our on-premises infrastructure and cloud computing. The first objective was to promote agile development. The hybrid cloud environment, coupled with a microservices-based architecture and agile development methodologies, allowed us to rapidly iterate and deploy new features while maintaining robust security. The choice of a microservices architecture arose from the need to flexibly respond to changes in services and libraries, and as part of this shift, our team also adopted a development method called "Scrum" where we release features incrementally in short cycles of a few weeks, ultimately resulting in streamlined workflows. ... The second objective is to use resources effectively. The manufacturing floor, where AI models are created, is now also facing strict cost efficiency requirements. With a hybrid cloud approach, we can use on-premises resources during normal operations and scale to the cloud during peak demand, thus reducing GPU usage costs and optimizing performance. This also allows us to flexibly adapt to an expected increase in the number of AI Platform users in the future.


Privacy is a human right, and blockchain is critical to securing it

While blockchain offers decentralized and secure transactions, the lack of privacy on public blockchains can expose users to risks, from theft to persecution. In October, details emerged of one of the largest in-person crypto thefts in US history after a DC man was targeted when kidnappers were able to identify him as an early crypto investor. However, despite the case for on-chain privacy, it’s proven difficult to advance any real-world implementations. Along with the regulatory challenges faced by segments such as privacy coins and mixers, certain high-profile missteps have done little to advance the case for on-chain privacy. Worldcoin, Sam Altman’s much-touted crypto identity project that collected biometric data from users, has also failed to live up to expectations due to, perversely, concerns from regulators about breaches of users’ data privacy. In August, the government of Kenya suspended Worldcoin’s operations following concerns about data security and consent practices. In October, the company announced it was pivoting away from the EU and towards Asian and Latin American markets, following regulatory wrangling over the European GDPR rules.


Transforming fragmented legacy controls at large banks

You’re not just talking about replacing certain components of a process with technology. There’s also a cost to this change. It’s not always on the top of the list when budgets come around. Usually, spend goes on areas that are revenue generating or more in the innovation space. It can be somewhat of a hard sell to the higher-ups as to why they would spend money to change something, and a lot of organisations aren’t great at articulating the business case for it. ... If you take the operational resilience perspective, for example, that’s about being able to get your arms around your important business services, to use regulatory language. What is supporting them? What does it take to maintain them, keep them resilient and available, and recover them? The reality is that this used to be infinitely more straightforward. Most of the systems may have been in your own data centre in your own building. Now, the ecosystems that support most of these services are much more complex. You’ve obviously got cloud providers, SaaS providers, and third parties that you’ve outsourced to. You’ve also got a huge number of different services where, even if you’ve bought them and they’re in-house, there is a myriad of internal teams to navigate.


Why the Growing Adoption of IoT Demands Seamless Integration of IT and OT

Effective cybersecurity in OT environments requires a mix of skills and knowledge from both IT and OT teams. This includes professionals from IT infrastructure and cybersecurity, as well as control system engineers, field operations staff, and asset managers typically found in OT. ... The integration of IT and OT through advanced IoT protocols represents a major step forward in securing industrial and healthcare systems. However, this integration introduces significant challenges. I propose a new approach to IoT security that incorporates protocol-agnostic application layer security, lightweight cryptographic algorithms, dynamic key management, and end-to-end encryption, all based on zero-trust network architecture (ZTNA). ... In OT environments, remediation steps must go beyond traditional IT responses. While many IT security measures reset communication links and wipe volatile memory to prevent further compromise, additional processes are needed for identifying, classifying, and investigating cyber threats in OT systems. Furthermore, organizations can benefit from creating unified governance structures and cross-training programs that align the priorities of IT and OT teams. 



Quote for the day:

"There are three secrets to managing. The first secret is have patience. The second is be patient. And the third most important secret is patience." -- Chuck Tanner

Daily Tech Digest - December 12, 2024

The future of AI regulation is up in the air: What’s your next move?

The problem, Jones says, is that the lack of regulations boils down to a lack of accountability when it comes to what your large language models are doing — and that includes hoovering up intellectual property. Without regulations and legal ramifications, resolving issues of IP theft will either boil down to court cases, or more likely, especially in cases where the LLM belongs to a company with deep pockets, the responsibility will slide downhill to the end users. And when profitability outweighs the risk of a financial hit, some companies are going to push the boundaries. “I think it’s fair to say that the courts aren’t enough, and the fact is that people are going to have to poison their public content to avoid losing their IP,” Jones says. “And it’s sad that it’s going to have to get there, but it’s absolutely going to have to get there if the risk is, you put it on the internet, suddenly somebody’s just ripped off your entire catalog and they’re off selling it directly as well.” ... “These massive weapons of mass destruction, from an AI perspective, they’re phenomenally powerful things. There should be accountability for the control of them,” Jones says. “What it will take to put that accountability onto the companies that create the products, I believe firmly that that’s only going to happen if there’s an impetus for it.”


Leading VPN CCO says digital privacy is "a game of chess we need to play"

Sthanu calls VPNs a first step, and puts forward secure browsers as a second. IPVanish recently launched a secure browser, which is an industry first, and something not offered by other top VPN providers. "It keeps your browser private, blocking tracking, encrypting the sessions, but also protecting your device from any malware," Sthanu said. IPVanish's secure browser utilises the cloud. Session tracking, cookies, and targeting are all eliminated, as web browsing operates in a cloud sandbox. ... Encrypting your data is a vital part of what VPNs do. AES 256-bit and ChaCha20 encryption are currently the standards for the most secure VPNs, which do an excellent job at encrypting and protecting your data. These encryption ciphers can protect you against the vast majority of cyber threats out there right now – but as computers and threats develop, security will need to develop too. Quantum computers are the next stage in computing evolution, and there will come a time, predicted to be in the next five years, when these computers can break 256-bit encryption – this is being referred to as "Q-day." Quantum computers are not readily available at this moment in time, with most found in universities or research labs, but they will become more widespread.
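
For a sense of what these ciphers look like in practice, here is a minimal sketch of AES-256-GCM authenticated encryption using Node's built-in crypto module. It is illustrative only: a real VPN negotiates fresh per-session keys and nonces through a handshake rather than using the static values shown here.

```typescript
// Minimal sketch: AES-256-GCM with Node's built-in crypto module.
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

const key = randomBytes(32); // 256-bit key; hypothetical stand-in for a session key
const iv = randomBytes(12);  // 96-bit nonce; must be fresh per message in practice

function encrypt(plaintext: string): { ciphertext: Buffer; tag: Buffer } {
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { ciphertext, tag: cipher.getAuthTag() }; // tag authenticates the data
}

function decrypt(ciphertext: Buffer, tag: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // decryption throws if the data was tampered with
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

const { ciphertext, tag } = encrypt("browsing session data");
console.log(decrypt(ciphertext, tag)); // "browsing session data"
```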


Kintsugi Leaders: Conservers of talent who convert the weak into winners

Profligate Leaders are not only gluttonous in their appetite for consuming resources, they are usually also choosy about the kind they will order. Not for them the tedious effort of using their own best-selling, training cookbook for seasoning and stirring the youth coming out from the country’s stretched and creaking educational system. On the contrary, they push their HR to queue up for ready-cooked candidates outside the portals of elite institutes on day zero. ... Kintsugi Leaders can create nobility of a different kind if they follow three precepts. The central one is the willingness to bet big and take risks on untried talent. In one of my Group HR roles, eyebrows were raised when I placed the HR leadership of large businesses in the hands of young, internally groomed talent instead of picking stars from the market. ... There is a third (albeit rare) kind of HR leader: the Trusted Transformer who can convert a Profligate Leader into the Kintsugi kind. Revealable corporate examples are thin on the ground. In keeping with the Kintsugi theme, then, I have to fall back on Japan. Itō Hirobumi had a profound influence on Emperor Meiji and played a pivotal role in shaping the political landscape of Meiji-era Japan.


4 North Star Metrics for Platform Engineering Teams

“Acknowledging that DORA, SPACE and DevEx provide different slivers or different perspectives into the problem, our goal was to create a framework that encapsulates all the frameworks,” Noda said, “like one framework to rule them all, that is prescriptive and encapsulates all the existing knowledge and research we have.” DORA metrics don’t mean much at the team level, but, he continued, developer satisfaction — a key measurement of platform engineering success — doesn’t matter to a CFO. “There’s a very intentional goal of making especially the key metrics, but really all the metrics, meaningful to all stakeholders, including managers,” Noda said. “That enables the organization to create a single, shared and aligned definition of productivity so everyone can row in the same direction.” The Core 4 key metrics are: an average of diffs per engineer to measure speed; the Developer Experience Index, or homegrown developer experience surveys, to measure effectiveness; change failure rate to measure quality; and the percentage of time spent on new capabilities to measure impact. DX’s own DXI, which uses a standardized set of 14 Likert-scale questions — from strongly agree to strongly disagree — is currently only available to DX users.
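
As a hypothetical sketch of how a team might compute these four measurements from its own delivery data (the input fields are illustrative assumptions, not DX's actual schema):

```typescript
// Hedged sketch: deriving the four Core 4 measurements from invented data.
interface QuarterData {
  diffsMerged: number;        // merged changes (diffs/PRs)
  engineers: number;
  deployments: number;
  failedDeployments: number;  // deployments that caused incidents or rollbacks
  featureHours: number;       // time spent on new capabilities
  totalHours: number;
  dxiScore: number;           // survey-based experience index, e.g. 0-100
}

function core4(q: QuarterData) {
  return {
    speed: q.diffsMerged / q.engineers,           // average diffs per engineer
    effectiveness: q.dxiScore,                    // Developer Experience Index
    quality: q.failedDeployments / q.deployments, // change failure rate
    impact: q.featureHours / q.totalHours,        // share of time on new capabilities
  };
}

console.log(core4({
  diffsMerged: 480, engineers: 24, deployments: 200,
  failedDeployments: 14, featureHours: 5200, totalHours: 8000, dxiScore: 72,
}));
// { speed: 20, effectiveness: 72, quality: 0.07, impact: 0.65 }
```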


The future of data: A 5-pillar approach to modern data management

To succeed in today’s landscape, every company — small, mid-sized or large — must embrace a data-centric mindset. This article proposes a methodology for organizations to implement a modern data management function that can be tailored to meet their unique needs. By “modern”, I refer to an engineering-driven methodology that fully capitalizes on automation and software engineering best practices. This approach is repeatable, minimizes dependence on manual controls, harnesses technology and AI for data management and integrates seamlessly into the digital product development process. ... Unlike the technology-focused Data Platform pillar, Data Engineering concentrates on building distributed parallel data pipelines with embedded business rules. It is crucial to remember that business needs should drive the pipeline configuration, not the other way around. For example, if preserving the order of events is essential for business needs, the appropriate batch, micro-batch or streaming configuration must be implemented to meet these requirements. Another key area involves managing the operational health of data pipelines, with an even greater emphasis on monitoring the quality of the data flowing through the pipeline. 
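
To make the ordering example concrete, here is a minimal sketch of the usual mechanism: routing all events that share a key to the same partition, so that a consumer reading one partition sees each key's events in order. The event shape and hash function are illustrative assumptions, not any specific pipeline's implementation.

```typescript
// Minimal sketch: keyed partitioning preserves per-key event order.
interface Event { key: string; payload: string; }

function partitionFor(key: string, numPartitions: number): number {
  // Simple deterministic hash; production systems use murmur/crc variants.
  let h = 0;
  for (const ch of key) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h % numPartitions;
}

const partitions: Event[][] = [[], [], [], []];
const stream: Event[] = [
  { key: "order-42", payload: "created" },
  { key: "order-7",  payload: "created" },
  { key: "order-42", payload: "paid" }, // must follow "created" for order-42
];

for (const e of stream) partitions[partitionFor(e.key, partitions.length)].push(e);
// All "order-42" events land in the same partition, keeping their relative order.
```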


How and Why the Developer-First Approach Is Changing the Observability Landscape

First and foremost, developers aim to avoid issues altogether. They seek modern observability solutions that can prevent problems before they occur. This goes beyond merely monitoring metrics: it encompasses the entire software development lifecycle (SDLC) and every stage of development within the organization. Production issues don't begin with a sudden surge in traffic; they originate much earlier when developers first implement their solutions. Issues begin to surface as these solutions are deployed to production and customers start using them. Observability solutions must shift to monitoring all the aspects of SDLC and all the activities that happen throughout the development pipeline. This includes the production code and how it’s running, but also the CI/CD pipeline, development activities, and every single test executed against the database. Second, developers deal with hundreds of applications each day. They can’t waste their time manually tuning alerting for each application separately. The monitoring solutions must automatically detect anomalies, fix issues before they happen, and tune the alarms based on the real traffic. They shouldn’t raise alarms based on hard limits like 80% of the CPU load.
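
As a minimal illustration of that last point, the toy detector below flags a metric when it deviates from its own recent behaviour rather than when it crosses a fixed threshold. This is a rolling-statistics sketch under simplified assumptions, not any particular vendor's algorithm.

```typescript
// Flag a reading as anomalous when it sits more than k standard deviations
// away from the rolling mean of its own recent history.
function isAnomalous(history: number[], latest: number, k = 3): boolean {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  return Math.abs(latest - mean) > k * Math.sqrt(variance);
}

// A service that normally idles near 30% CPU: 45% is already unusual for it,
// even though a hard 80% alarm would stay silent.
const cpuHistory = [29, 31, 30, 28, 32, 30, 29, 31];
console.log(isAnomalous(cpuHistory, 45)); // true
console.log(isAnomalous(cpuHistory, 31)); // false
```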


We must adjust expectations for the CISO role

The sense of vulnerability CISOs feel today is compounded by a shifting accountability model in the boardroom. As cybersecurity incidents make front-page news more frequently, boards and executive teams are paying closer attention. This increased scrutiny is a double-edged sword: on the one hand, it can mean greater support and resources; on the other, it often translates to CISOs being in the proverbial hot seat. What’s more, cybersecurity is still a rapidly evolving field with few long-standing best practices. It’s a space marked by constant adaptation, bringing a certain degree of trial and error. When an error occurs—especially one that leads to a breach—the CISO’s role is scrutinized. While the entire organization might have a role in cybersecurity, CISOs are often expected to bear the brunt of accountability. This dynamic is unsettling for many in the position, and the 99% of CISOs who fear for their job security in the event of a breach clearly illustrates this point. So, what can be done? Both organizations and CISOs are responsible for recalibrating expectations and addressing the root causes of these pervasive job security fears. For organizations, a starting point is to shift cybersecurity from a reactive to a proactive stance. Investing in continuous improvement—whether through advanced security technologies, employee training, or cyber insurance—is crucial.


Bug bounty programs can deliver significant benefits, but only if you’re ready

The most significant benefit of a bug bounty program is finding vulnerabilities an organization might not have otherwise discovered. “A bug bounty program gives you another avenue of identifying vulnerabilities that you’re not finding through other processes,” such as internal vulnerability scans, Stefanie Bartak, associate director of the vulnerability management team at NCC Group, tells CSO. Establishing a bug bounty program signals to the broader security research community that an organization is serious about fixing bugs. “For an enterprise, it’s a really good way for researchers, or anyone, to be able to contact them and report something that may not be right in their security,” Louis Nyffenegger, CEO of PentesterLab, tells CSO. Moreover, a bug bounty program will offer an organization a wider array of talent to bring perspectives that in-house personnel don’t have. “You get access to a large community of diverse thinkers, which help you find vulnerabilities you may otherwise not get good access to,” Synack’s Lance says. “That diversity of thought can’t be underestimated. Diversity of thought and diversity of researchers is a big benefit. You get a more hardened environment because you get better or additional testing in some cases.”


Harnessing SaaS to elevate your digital transformation journey

The impact of AI-driven SaaS solutions can be seen across multiple industries. In retail, AI-powered SaaS platforms enable businesses to analyze consumer behavior in real-time, providing personalized recommendations that drive sales. In manufacturing, AI optimizes supply chain management, reducing waste and increasing productivity. In the finance sector, AI-driven SaaS automates risk assessment, improving decision-making and reducing operational costs. ... As businesses continue to adopt SaaS and AI-driven solutions, the future of digital transformation looks promising. Companies are no longer just thinking about automating processes or improving efficiency, they are investing in technologies that will help them shape the future of their industries. From developing the next generation of products to understanding their customers better, SaaS and AI are at the heart of this evolution. CTOs, like myself, are now not only responsible for technological innovation but are also seen as key contributors to shaping the company’s overall business strategy. This shift in leadership focus will be critical in helping organizations navigate the challenges and opportunities of digital transformation. By leveraging AI and SaaS, we can build scalable, efficient, and innovative systems that will drive growth for years to come. 


What makes product teams effective?

More enterprises are adopting a cross-functional team model, yet many still tend to underinvest in product management. While they make sure to fill the product owner role—a person accountable for translating business needs into technology requirements—they do not always choose the right individual for the product manager role. Effective product managers are business leaders with the mindset and technical skills to guide multiple product teams simultaneously. They shape product strategy, define requirements, and uphold the bar on delivery quality, usually partnering with an engineering or technology lead in a two-in-a-box model. ... Unsurprisingly, when organizations recognize individual expertise, provide options for career progression, and base promotions on capabilities, employees are more engaged and satisfied with their teams. Similarly, by standardizing and reducing the overall number of roles, organizations naturally shift to a balanced ratio of orchestrators (minority) to doers (majority), which increases team capacity without hiring more employees. This shift helps ensure teams can meet their delivery commitments and creates a transparent environment where individuals feel empowered and informed.



Quote for the day:

“Things come to those who wait, but only the things left by those who hustle” -- Abraham Lincoln

Daily Tech Digest - December 11, 2024

Low-tech solutions to high-tech cybercrimes

The growing quality of deepfakes, including real-time deepfakes during live video calls, invites scammers, criminals, and even state-sponsored attackers to convincingly bypass security measures and steal identities for all kinds of nefarious purposes. AI-enabled voice cloning has already proved to be a massive boon for phone-related identity theft. AI enables malicious actors to bypass face recognition protection. And AI-powered bots are being deployed to intercept and use one-time passwords in real time. More broadly, AI can accelerate and automate just about any cyberattack. ... Once established (not in writing… ), the secret word can serve as a fast, powerful way to instantly identify someone. And because it’s not digital or stored anywhere on the Internet, it can’t be stolen. So if your “boss” or your spouse calls you to ask you for data or to transfer funds, you can ask for the secret word to verify it’s really them. ... Farrow emphasizes a simple way to foil spyware: reboot your phone every day. He points out that most spyware is purged with a reboot. So rebooting every day makes sure that no spyware remains on your phone. He also stresses the importance of keeping your OS and apps updated to the latest version.


7 Essential Trends IT Departments Must Tackle In 2025

Taking responsibility for cybersecurity will remain a key function of IT departments in 2025 as organizations face off against increasingly sophisticated and frequent attacks. Even as businesses come to understand that everyone from the boardroom to the shop floor has a part to play in preventing attacks, IT teams will inevitably be on the front line, with the job of securing networks, managing update and installation schedules, administering access protocols and implementing zero-trust measures. ... In 2025, AIOps are critical to enabling businesses to benefit from real-time resource optimization, automated decision-making and predictive incident resolution. This should empower the entire workforce, from marketing to manufacturing, to focus on innovation and high-value tasks rather than repetitive technical work best left to machines. ... with technology functions playing an increasingly integral role in business growth, other C-level roles have emerged to take on some of the responsibilities. As well as Chief Data Officers (CDOs) and Chief Information Security Officers (CISOs), it’s increasingly common for organizations to appoint Chief AI Officers (CAIOs), and as the role of technology in organizations continues to evolve, more C-level positions are likely to become critical.


Passkey adoption by Australian govt, banks drives wider passwordless authentication

“A key change has been to the operation of the security protocols that underpin passkeys and passwordless authentication. As this has improved over time, it has engendered more trust in the technology among technology teams and organisations, leading to increased adoption and use.” “At the same time, users have become more comfortable with biometrics to authenticate to digital services.” Implementation and enablement have also improved, leveraging templates and no-code, drag-and-drop orchestration to “allow administrators to swiftly design, test and deploy various out-of-the-box passwordless registration and authentication experiences for diverse customer identity types, all at scale, with minimal manual setup.” ... Banks are among the major drivers of passkey adoption in Australia. According to an article in the Sydney Morning Herald, National Australia Bank (NAB) chief security officer Sandro Bucchianeri says passwords are “terrible” – and on the way out. ... Specific questions pertaining to passkeys include, “Do you agree or disagree with including use of a passkey as an alternative first-factor identity authentication process?” and “Does it pose any security or fraud risks? If so, please describe these in detail.”
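
For readers curious what passkey support looks like in code, below is a minimal browser-side sketch using the standard WebAuthn API. The relying-party details, user fields, and challenge are hypothetical placeholders; in a real flow the challenge is issued by the server and the resulting credential is sent back for verification.

```typescript
// Minimal sketch: registering a passkey via the WebAuthn API in a browser.
async function registerPasskey(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      rp: { name: "Example Bank", id: "example-bank.com.au" }, // hypothetical relying party
      user: {
        id: new TextEncoder().encode("user-1234"),
        name: "customer@example.com",
        displayName: "Example Customer",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // triggers the biometric or PIN check
      },
    },
  });
}
```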


Why crisis simulations fail and how to fix them

Communication gaps are particularly common between technical leadership and business executives. These teams work in silos, which often causes misalignment and miscommunication. Technical staff use jargon that executives don’t fully understand, while business priorities may be unclear to the technical team. As a result, it becomes difficult to discern what requires immediate attention and communication versus what constitutes noise. This slows down critical decisions. Now throw in third-party vendors or MSPs, and this just amplifies the confusion and adds to the chaos. Role confusion is an interesting challenge. Crisis management playbooks typically have roles assigned to tasks, but no detail on what these roles mean. I have seen teams come into an exercise confident about the name of their role, but no idea what the role means in terms of actual execution. Many times, teams don’t even know that a role exists within the team or who owns it. A fitting example is a “crisis simulation secretary” — someone tasked with recording the notes for the meetings, scheduling the calls, making sure everyone has the correct numbers to dial in, etc. This may seem trivial, but it is a critical role, as you do not want to waste precious minutes trying to dial into a call. 


What CIOs are in for with the EU’s Data Act

There are many things the CIO will have to perform in light of Data Act provisions. In the meantime, as explained by Perugini, CIOs must do due diligence on the data their companies collect from connected devices and understand where they are in the value chain — whether they are the owners, users, or recipients. “If the company produces a connected industrial machine and gives it to a customer and then maintains the machine, it finds itself collecting the data as the owner,” she says. “If the company is a customer of the machine, it’s a user and co-generates the data. But if it’s a company that acquires the data of the machine, it’s a recipient because the user or the manufacturer has allowed it to make them available or participates in a data marketplace. CIOs can also see if there’s data generated by others on the market that can be used for internal analysis, and procure it. Any use or exchange of data must be regulated by an agreement between the interested parties with contracts.” The CIO will also have to evaluate contracts with suppliers, ensuring terms are compliant, and negotiate with suppliers to access data in a direct and interoperable way. Plus, the CIO has to evaluate whether the company’s IT infrastructure is suitable to guarantee interoperability and security of data as per GDPR. 


How slowing down can accelerate your startup’s growth

In startup culture, there’s a pervasive pressure to say “yes” to every opportunity, to grow at all costs. But I’ve learned that restraint is an underrated virtue in business. At Aloha, we had to make tough choices to stay on the path of sustainable growth. We focused on our core mission and turned down attractive but potentially distracting opportunities that would have taken resources away from what mattered most. ... One of the most persistent traps for startups is the “growth at all costs” mindset. Top-line growth can be impressive, but if it’s achieved without a path to profitability, it’s a house of cards. When I joined Aloha, we refocused our efforts on creating a financially sustainable business. This meant dialing back on some of our expansion plans to ensure we were growing within our means. ... In a world that worships speed, it takes courage to slow down. It’s not easy to resist the siren call of hypergrowth. But when you do, you create the conditions for a business that can weather storms, adapt to change, and keep thriving. Building a company on these principles doesn’t mean abandoning growth—it means ensuring that growth is meaningful and sustainable. Slow and steady may not be glamorous, but it works.


Why business teams must stay out of application development

Citizen development is when non-tech users build business applications using no-code/low-code platforms, which automate code generation. Imagine that you need a simple leave application tool within the organization. Enterprises can’t afford to deploy their busy and expensive professional resources to build an internal tool. So, they go the citizen development way. ... Proponents of citizen development argue that the apps built with low-code platforms are highly customizable. What they mean is that they have the ability to mix and match elements and change colors. For enterprise apps, this is all in a day’s work. True customizability comes from real editable code that empowers developers to hand-code parts to handle complex and edge cases. Business users cannot build these types of features because low-code platforms themselves are not designed to handle this. ... Finally, the most important loophole that citizen development creates is security. A vast majority of security attacks happen due to human error, such as phishing scams, downloading ransomware, or improper credential management. In fact, IBM found that there has been a 71% increase this year in cyberattacks that used stolen or compromised credentials.


The rise of observability: A new era in IT Operations

Observability empowers organisations to not just detect that a problem exists, but to understand why it’s happening and how to resolve it. It’s the difference between knowing that a car has broken down and having a detailed diagnostic report that pinpoints the exact issue and suggests an effective repair. The transition from monitoring to observability is not without its challenges. Some organisations find themselves struggling with legacy systems and entrenched processes that resist change. Observability represents a shift from traditional IT operations, requiring a new mindset and skill set. However, the benefits of implementing observability practices far outweigh the initial challenges. While there may be concerns about skill gaps, modern observability platforms are designed to be user-friendly and accessible to team members at all levels. ... Implementing observability results in clear, measurable benefits, especially around improved service reliability. Because teams can identify and resolve issues quickly and proactively, downtime is minimised or eradicated. Enhanced reliability leads to better customer experiences, which is a crucial differentiator in a competitive market where user satisfaction is key.


5 Trends Reshaping the Data Landscape

With increased interest in generative AI and predictive AI, as well as supporting traditional analytical workloads, “we’re seeing a pretty massive increase of data sprawl across industries,” he observed. “They track with the realization among many of our customers that they’ve created a lot of different versions of the truth and silos of data which have different systems, both on-prem and in the cloud.” ... If a data team “can’t get the data where it needs to go, they’re not going to be able to analyze it in an efficient, secure way,” he said. “Leaders have to think about scale in new ways. There are so many systems downstream that consume data. Scaling these environments as the data is growing in many cases by almost double-digit percentages year over year is becoming unwieldy.” A proactive approach is to address these costs and silos through streamlining and simplification on a single common platform, Kethireddy urged, noting Ocient’s approach to “take the path to reducing the amount of hardware and cloud instances it takes to analyze compute-intensive workloads. We focus on minimizing costs associated with the system footprint and energy consumption.”


Serverless Computing: The Future of Programming and Application Deployment Innovations

Serverless computing automates scaling: the cloud provider adds and removes function instances to match the workload, freeing developers to focus on code rather than infrastructure. Providers likewise automate the distribution of incoming traffic across the multiple interconnected instances of a serverless function. This elasticity means developers can build applications that handle large volumes of traffic without managing the underlying cloud environment. Serverless functions are, however, time-limited, typically running from milliseconds to several minutes, so application code must be optimized to complete within that window. ... Cloud providers integrate security features such as encryption and access control into the cloud infrastructure, applying automated security updates and patches while still supporting rapid prototyping. However, serverless computing also has drawbacks that reflect negatively on cloud services. The first invocation of a newly initiated serverless function incurs extra response time (the "cold start"), and the constraints of a serverless architecture, notably the limited function lifecycle, can drastically affect performance.
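
To ground this, here is a minimal sketch of a serverless function in the AWS Lambda Node.js style (handler signatures differ across providers; the types here come from the community @types/aws-lambda package). Module-scope work runs once per provisioned instance, which is exactly why a cold first call is slower than a warm one.

```typescript
// Minimal sketch of a serverless handler. Module-scope code runs once per
// cold start; per-invocation work stays inside the handler function.
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Cold-start cost: executed once when the platform provisions a new instance.
const startedAt = Date.now();

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // Per-invocation work; the platform adds/removes instances to match traffic.
  return {
    statusCode: 200,
    body: JSON.stringify({
      instanceAgeMs: Date.now() - startedAt, // near zero on a cold start
      path: event.path,
    }),
  };
};
```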
 


Quote for the day:

"If you want to be successful prepare to be doubted and tested." -- @PilotSpeaker

Daily Tech Digest - December 08, 2024

Here’s the one thing you should never outsource to an AI model

One of the biggest dangers in letting AI take the reins of your product ideation process is that AI processes content — be it designs, solutions or technical configurations — in ways that lead to convergence rather than divergence. Given the overlapping bases of training data, AI-driven R&D will result in homogenized products across the market. Yes, different flavors of the same concept, but still the same concept. Imagine this: Four of your competitors implement gen AI systems to design their phones’ user interfaces (UIs). Each system is trained on more or less the same corpus of information — data scraped from the web about consumer preferences, existing designs, bestseller products and so on. What do all those AI systems produce? Variations of a similar result. What you’ll see develop over time is a disturbing visual and conceptual cohesion where rival products start mirroring one another. ... In platforms like ArtStation, many artists have raised concerns regarding the influx of AI-produced content that, instead of showing unique human creativity, feels like recycled aesthetics remixing popular cultural references, broad visual tropes and styles. This is not the cutting-edge innovation you want powering your R&D engine.


How much capacity is in aging data centers?

Individual data centers have considerable differences between them, and one of the most critical is their size. With this weighting factor, the average moves — but not by much. The “average megawatt” is 10.2 years old. Whereas older data centers (10-plus years) represent 48 percent of the survey sample, they contain 38 percent of the total IT capacity — still a large minority. Interestingly, a more dramatic shift occurs within the population of data centers that have been operating for less than 10 years — well within the typical design lifespan. By facility count alone, there is an even split between the data centers that are one to five years old and those that have been in operation for six to ten years. But when measuring in megawatts, the newest data centers hold significantly more capacity (38 percent) than those with six to ten years of service. This is intuitive; in the past five years, some data center projects have reached unprecedented sizes. Very recent builds are overshadowing the capacity of data centers that are only slightly older, even though the designs are not dramatically different. However, the weighted figures above suggest that even this massive build-out has not yet overcome the moderating influence of much older, potentially less efficient facilities.
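
The weighting arithmetic behind these figures is straightforward; the sketch below, using invented numbers, shows how a single very large recent build can pull the capacity-weighted average age well below the simple per-facility average.

```typescript
// Illustrative only: per-facility average age vs. capacity-weighted age.
interface Facility { ageYears: number; capacityMW: number; }

const fleet: Facility[] = [
  { ageYears: 2,  capacityMW: 60 }, // one huge recent build...
  { ageYears: 8,  capacityMW: 10 },
  { ageYears: 14, capacityMW: 8 },  // ...outweighs several older sites
  { ageYears: 16, capacityMW: 6 },
];

const byCount = fleet.reduce((s, f) => s + f.ageYears, 0) / fleet.length;
const totalMW = fleet.reduce((s, f) => s + f.capacityMW, 0);
const byMW = fleet.reduce((s, f) => s + f.ageYears * f.capacityMW, 0) / totalMW;

console.log(byCount.toFixed(1)); // "10.0" (simple average age)
console.log(byMW.toFixed(1));    // "4.9"  (capacity-weighted age skews newer)
```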


Generative AI is making traditional ways to measure business success obsolete

Often touted as the “iron triangle” from the perspective of operational efficiency, this equation implies that, in order to attain a degree of quality, firms must balance cost with the time spent to achieve that level of quality. ... AI has upended this thinking, as firms can now achieve both speed and accuracy at the same time by leveraging AI. This can enhance productivity and drive innovation without losing out on quality. Likewise, through generative AI, smaller companies with fewer resources are able to rub shoulders and compete with larger firms using AI-powered tools. They can do this by streamlining operations, creating cost-effective marketing content and delivering personalised customer experiences. This can make existing businesses more efficient, competitive and creative. It can also lower the barriers to entry into markets for prospective small and medium-sized business owners. ... The UK government’s recent autumn budget included a number of tax rises that will hit businesses, especially some small and medium-sized enterprises (SMEs) that don’t have the financial buffers to weather severe economic challenges. Generative AI has reconfigured the Cost x Time = Quality formula and has enabled firms to do things both quickly and accurately without a trade-off.


UK Cyber Risks Are ‘Widely Underestimated,’ Warns Country’s Security Chief

“What has struck me more forcefully than anything else since taking the helm at the NCSC is the clearly widening gap between the exposure and threat we face, and the defences that are in place to protect us,” he said. “And what is equally clear to me is that we all need to increase the pace we are working at to keep ahead of our adversaries.”  ... Horne added that the guidance and frameworks drawn up by the NCSC are not widely used. Ultimately, businesses need to change their perspective on cyber security from a “necessary evil” or “compliance function” to “an integral part of achieving their purpose.” ... “The defence and resilience of critical infrastructure, supply chains, the public sector and our wider economy must improve” to protect against these nation-state threats, Horne said. Ian Birdsey, partner and cyber specialist at law firm Clyde & Co, told TechRepublic in an email: “The UK has increasingly become a target for hostile nations due to the redrawing of geopolitical battle lines and the rise in global conflicts in recent years. In turn, threat actors based in those territories are increasingly launching more severe and sophisticated cyberattacks on UK organisations, particularly within critical national infrastructure and its supply chain.”


5 JavaScript Libraries You Should Say Goodbye to in 2025

jQuery is the grandparent of modern JavaScript libraries, loved for its cross-browser support, simple DOM manipulation, and concise syntax. However, in 2025, it’s time to officially let go. Native JavaScript APIs and modern frameworks like React, Vue, and Angular have rendered jQuery’s core utilities obsolete. Vanilla JavaScript now includes native methods such as querySelector, addEventListener, and fetch that more conveniently provide the functionality we once relied on jQuery to deliver. Also, modern browsers have standardized their behavior, making the need for a cross-browser solution like jQuery redundant. And bundling jQuery into an application today adds unnecessary bloat, slowing down load times in an age when speed is king. ... Moment.js was the default date-handling library for a long time, and it was celebrated for its ability to parse, validate, manipulate, and display dates. However, it’s now heavy and inflexible compared to newer alternatives, and it has been deprecated. Moment.js clocks in at around 66 KB (minified), which can be a significant payload in an era where smaller bundle sizes lead to faster performance and better UX.
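
For illustration, here is roughly how the native APIs cover what jQuery and Moment.js were once needed for; the jQuery and Moment equivalents appear as comments, and the selectors and endpoint are hypothetical.

```typescript
// $("#status").text("Ready");
document.querySelector<HTMLElement>("#status")!.textContent = "Ready";

// $(".btn").on("click", handler);
document.querySelectorAll<HTMLButtonElement>(".btn").forEach((btn) =>
  btn.addEventListener("click", () => console.log("clicked"))
);

// $.getJSON("/api/items", cb);  -- hypothetical endpoint
async function loadItems(): Promise<unknown> {
  const res = await fetch("/api/items");
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// $("#panel").fadeIn();  -- replaced by CSS transitions plus class toggling
document.querySelector("#panel")?.classList.add("visible");

// moment(d).format("MMM D, YYYY");  -- built-in Intl covers most formatting
new Intl.DateTimeFormat("en-US", { dateStyle: "medium" }).format(new Date());
```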


How media, publishing and entertainment organizations can master Data Governance in the age of AI

One of the reasons AI governance has proven to be such a challenging new discipline is that it’s so multifaceted. Tiankai explained that it comprises several key elements:

- Ownership and stewardship: AI models need ownership, and so does AI governance. The right people must be accountable for ensuring AI models are used in the right ways.
- Cross-functional decision-making: A cross-domain thinking and decision-making model is essential. One central function can’t make every AI-relevant governance decision, so you need ways to bring the accountable people together.
- Processes and metadata: Teams must make their models explainable, so everyone can understand the quality of their outputs and the root causes of any negative outcomes.
- Technology enablement: Technology must support governance frameworks and make them work at scale.

This shows that AI governance requires a combination of people, process and technology change. The panel agreed that the ‘people’ element is the toughest to manage effectively. Nathalie Berdat, Head of Data and AI Governance, BBC, explained some of the people-specific challenges that she has encountered along the BBC’s AI governance journey.


5 ways to tell people what to do at work

Nick Woods, CIO of airport group MAG, said dialogue is the priority for any professional who wants to avoid ambiguity. "If you're telling somebody what to do, you're already in the wrong place," he said. "Success is about a coaching, conversational dialogue that you need to have that ultimately comes down to a handshake on, 'Are we clear on what's next?'" Woods told ZDNET that most management decisions involve an ongoing debate. He doesn't believe in being directive about outputs and telling people what they need to go and do. "I think I'm much more in a space of, 'Actually, I've hired good people. I'm going to allow you to go and tell me what we need to do, and then we're going to have a dialogue about it,'" he said. ... Niall Robinson, head of product innovation at the Met Office, said talented staff should be given space to express their creativity. "There's a temptation as a leader to tell people how to do stuff -- and that can be a trap," he said. Robinson told ZDNET that he focuses on avoiding that problem by trusting his staff to generate recommended actions. "A habit I've been trying to practice is to tell people what success looks like and then giving them the agency to describe the options to me because they're closer to many of the solutions. So, success is about giving people the power to advise me."


Navigating NextGen Enterprise Architecture with GenAI

GenAI can modernize technology architecture by facilitating the selection of optimal best-of-breed solutions based on deep analyses of diverse criteria. It offers tailored guidance aligned with business requirements as well as key capabilities such as scalability, resilience, and reversibility. This dynamic capacity adapts to evolving IT landscapes and business requirements, continuously refining recommendations based on changing needs and the technological state of the art. Moreover, GenAI accelerates in-house solution development by generating code snippets. It produces function and class code segments written in any programming language, which improves efficiency and reduces manual coding effort. This capacity improves developers' productivity and allows teams to focus more on high-level design. It also ensures that generated code is aligned with coding standards related to maintainability, readability, collaboration, and consistency. GenAI has amazing advantages, but it also has some major challenges. One of them is sustainability issues, which are increasingly important in technology adoption. In fact, many enterprises take this criterion into account in their technology architecture principles and assess it when they select a new solution to enhance their IT landscape.


The 7 R's of cloud migration: How to choose the right method

The R's model isn't new, but it has evolved significantly over the years. Its genesis is usually attributed to Gartner, who came up with the 5 R's model back in 2010. The original five were rehost, refactor, revise, rebuild and replace. As the cloud continued to evolve and more diverse workloads were being migrated to the cloud, AWS added a sixth R -- retire -- and eventually, a seventh, for retain. This seventh R is effectively an acknowledgment that not all workloads are suited to being hosted in the cloud. ... Rehosting can be done in a few ways, but it often means creating cloud-based virtual machines that mimic the infrastructure an application is currently running on. ... Rehosting an application requires you to create a cloud VM instance and then move the application onto that instance. Relocating, on the other hand, involves moving an existing VM from an on-premises environment to the cloud without making significant changes to it. ... A workload might be suitable for retirement if it is no longer actively supported by the vendor. In such cases, it's important to make sure you have a workaround before retiring an application the organization still uses. That might mean adopting a competing application that offers similar functionality or developing one in-house.


Evolving Your Architecture: Essential Steps and Tools for Modernization

Tech debt, lack of modernization can also get you out there in the news, and not as a very good thing, as we could see for SWA a couple years ago when they had a pretty huge meltdown with their booking systems and all that. It damaged their image, but also set them back on their revenue plans and all that, and still, nowadays they are facing the consequences of that meltdown, which was basically because of ignoring and putting aside the conversations about tech debt and application modernization as a whole. ... It's basically looking at the inventory of applications that you have in your organization, and understanding, what are the critical ones? What is the value that it adds? Alignment with the business goals. Really like, is it commodity? Can I just go and buy one off the shelf, too? Then it's fine, go and buy it. If it's something that differentiates you, you got to innovate, then it might be worth looking at building it and hence modernizing it. ... The other thing is the age of technology. If you have outdated technology, you very likely have vulnerabilities. If you have lack of support, either from the community or the vendors, there is a security vulnerability there, but there is no security patch being released because there is no support anymore.



Quote for the day:

"Do something today that your future self will thank you for." -- Unknown

Daily Tech Digest - December 07, 2024

In the recent past, people had the perception that HDD storage is slow and can only be used for backup. However, in the last 2 years, we have demonstrated in our European HDD laboratory how to combine multiple HDDs to test function and performance. If you have 100s of HDDs in your large-scale storage system, you also have around a billion different configuration possibilities. ... The demand for HDDs in surveillance applications continues to surge, with an increasing number of digital video recorder manufacturers entering the market. From relatively cheap surveillance systems for private homes, to medium priced surveillance systems to expensive surveillance systems for large-scale infrastructures like smart cities. The sequential nature of video surveillance data and the fact that it is over-written at some point in time, makes HDDs the uncontested choice at all levels for surveillance storage. ... At the very least, preserving a duplicate of one’s data using an alternative technology is a sensible measure. This could be a combination of cloud services or a mix of cloud and external storage, such as a USB-connected portable HDD like a Toshiba Canvio. It’s a small price to pay for peace of mind that your data is safe.


Top 3 Strategies for Leveraging AI to Transform Customer Intelligence

Transitioning from reactive to proactive engagement is one of AI's most transformative capabilities for customer intelligence. Predictive models trained on historical data allow organizations to anticipate customer needs, helping them deliver timely, relevant solutions. By recognizing patterns and trends, AI empowers businesses to forecast future customer actions — whether that's product preferences, the likelihood of churn, or upcoming purchase intent — enabling a more proactive approach to customer engagement. ... AI enables companies to personalize customer interactions dynamically across multiple channels. For instance, AI-powered chatbots can provide instant responses, creating a conversational experience that feels natural and responsive. By integrating these capabilities into CRM systems, companies ensure that every customer touchpoint — chat, email, or in-app messaging — is customized based on a customer's unique history and recent activities. This focus on personalization also extends to effective customer segmentation, as organizations aim to provide the right level of service to each customer based on their specific needs and entitlements.


Who’s the Bigger Villain? Data Debt vs. Technical Debt

Although data debt and tech debt are closely connected, there is a key distinction between them: you can declare bankruptcy on tech debt and start over, but doing the same with data debt is rarely an option. Reckless and unintentional data debt emerged from cheaper storage costs and a data-hoarding culture, where organizations amassed large volumes of data without establishing proper structures or ensuring shared context and meaning. It was further fueled by resistance to a design-first approach, often dismissed as a potential bottleneck to speed. ... With data debt, prevention is better than relying on a cure. Shift left is a practice that involves addressing critical processes earlier in the development lifecycle to identify and resolve issues before they grow into more significant problems. Applied to data management, shift left emphasizes prioritizing data modeling early, if possible — before data is collected or systems are built. Data modeling allows for following a design-first approach, where data structure, meaning, and relationships are thoughtfully planned and discussed before collection. This approach reduces data debt by ensuring clarity, consistency, and alignment across teams, enabling easier integration, analysis, and long-term value from the data.


Understanding NVMe RAID Mode: Unlocking Faster Storage Performance

While NVMe RAID mode offers excellent benefits, it’s not without its challenges. One of the most significant hurdles is the complexity of setting it up. RAID arrays, particularly with NVMe drives, require specialized hardware or software RAID controllers. Additionally, configuring RAID in the BIOS or UEFI settings can be tricky for less experienced users. Another challenge is cost. NVMe SSDs, while dropping in price over the years, are still generally more expensive than traditional SATA-based drives. Combining multiple NVMe drives into a RAID array can significantly increase the cost of the storage solution. For users on a budget, this might not be the most cost-effective option. Finally, RAID configurations that emphasize performance, like RAID 0, do not provide any data redundancy. If one drive fails, all data in the array is lost. ... NVMe RAID mode is ideal for users who need extremely fast read and write speeds, high storage capacity, and, in some cases, redundancy. This includes professionals who work with large video files, developers running complex simulations, and enthusiasts building high-end gaming PCs. Additionally, businesses that rely on fast access to large databases or those that run virtual machines may benefit from NVMe RAID configurations.
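
The RAID 0 trade-off is easy to quantify. The sketch below works out usable capacity and the rough odds of total data loss for striping versus mirroring; the drive count, per-drive capacity, and the 1.5% annual failure rate are assumed figures for illustration, and rebuild windows are ignored.

```python
# Back-of-envelope RAID math: RAID 0 stripes for capacity and speed
# but dies if any one drive dies; RAID 1 mirrors pairs, halving
# capacity but tolerating one failure per pair. AFR is assumed.
drives = 4
capacity_per_drive_tb = 2.0
afr = 0.015  # assumed per-drive annual failure rate

# RAID 0: the array survives the year only if every drive survives.
raid0_capacity = drives * capacity_per_drive_tb
raid0_loss = 1 - (1 - afr) ** drives

# RAID 1: data is lost only if both drives in a mirrored pair fail.
pairs = drives // 2
raid1_capacity = raid0_capacity / 2
raid1_loss = 1 - (1 - afr ** 2) ** pairs

print(f"RAID 0: {raid0_capacity:.0f} TB usable, {raid0_loss:.1%}/yr loss risk")
print(f"RAID 1: {raid1_capacity:.0f} TB usable, {raid1_loss:.3%}/yr loss risk")
```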


Supply chain compromise of Ultralytics AI library results in trojanized versions

According to researchers from ReversingLabs, the attackers leveraged a known exploit via GitHub Actions to introduce malicious code during the automated build process, thereby bypassing the usual code review. As a result, the code was present only in the package pushed to PyPI and not in the code repository on GitHub. The trojanized version of Ultralytics on PyPI (8.3.41) was published on Dec. 4. Ultralytics developers were alerted on Dec. 5 and attempted to push a new version (8.3.42) to resolve the issue, but because they didn’t initially understand the source of the compromise, this version ended up including the rogue code as well. A clean and safe version (8.3.43) was eventually published the same day. ... According to ReversingLabs’ analysis of the malicious code, the attacker modified two files: downloads.py and model.py. The code injected into model.py checks the type of machine where the package is deployed in order to download a payload targeted at that platform and CPU architecture; the rogue code that performs the payload download is stored in downloads.py. “While in this case, based on the present information the RL research team has, it seems that the malicious payload served was simply an XMRig miner, and that the malicious functionality was aimed at cryptocurrency mining,” ReversingLabs’ researchers wrote.
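
For teams that pulled Ultralytics during the exposure window, a quick audit is straightforward. The sketch below checks the locally installed version against the two compromised releases named in the article; the pattern is generic, and pinning dependencies with hashes (pip's --require-hashes mode) is the longer-term mitigation.

```python
# Check the locally installed ultralytics release against the
# trojanized versions reported by ReversingLabs (8.3.41, 8.3.42).
from importlib.metadata import version, PackageNotFoundError

TROJANIZED = {"8.3.41", "8.3.42"}  # compromised builds per the article

try:
    installed = version("ultralytics")
except PackageNotFoundError:
    print("ultralytics is not installed")
else:
    if installed in TROJANIZED:
        print(f"WARNING: ultralytics {installed} is a known-compromised "
              "build; upgrade to 8.3.43 or later")
    else:
        print(f"ultralytics {installed} is not in the known-bad list")
```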


Data Governance Defying Gravitas

When it comes to formalizing data governance in a complex organization, there’s often an expectation of gravitas — a sense of seriousness, authority, and weight that makes the effort seem formidable and unyielding. But let’s be honest: Too much gravitas can weigh down your data governance program before it even begins. Enter the Non-Invasive Data Governance approach, which flips the script on gravitas by delivering effectiveness without the unnecessary posturing, proving that you can have impact without the drama. ... Complex organizations are not static, and neither should their data governance approach be. NIDG defies the traditional concept of gravitas by embracing adaptability. While other frameworks crumble under the weight of organizational change, NIDG thrives in dynamic environments. It’s built to flex and evolve, ensuring governance remains effective as technologies, priorities, and personnel shift. This adaptability fosters a sense of trust. People know that NIDG isn’t a rigid set of rules, but a living framework designed to support their needs. It’s this trust that gives NIDG its gravitas — not the false authority of inflexible mandates, but the real authority that comes from being a program people believe in and rely on. 


Weaponized AI: Hot for Fraud, Not for Election Interference

"Criminals use AI-generated text to appear believable to a reader in furtherance of social engineering, spear phishing and financial fraud schemes such as romance, investment and other confidence schemes, or to overcome common indicators of fraud schemes," it said. More advanced use cases investigated by law enforcement include criminals using AI-generated audio clips to fool banks into granting them access to accounts, or using "a loved one's voice to impersonate a close relative in a crisis situation, asking for immediate financial assistance or demanding a ransom," the bureau warned. Key defenses against such attacks, the FBI said, include creating "a secret word or phrase with your family to verify their identity," which can also work well in business settings - for example, as part of a more robust defense against CEO fraud (see: Top Cyber Extortion Defenses for Battling Virtual Kidnappers). Many fraudsters attempt to exploit victims before they have time to pause and think. Accordingly, never hesitate to hang up the phone, independently find a phone number for a caller's supposed organization, and contact them directly, it said.


Data Assurance Changes How We Network

Today, the simplest way to control the path data takes between two points is to use a private network (leased lines, for example). But such private networks are extremely expensive and don’t offer much in the way of visibility. They also take months to provision, which slows business agility. Even with MPLS, IGP routing will always follow the shortest IGP path. If alternate paths are available, traffic engineering (TE) with segment routing (SR) can utilize non-shortest paths. However, if that decision is made within the Provider Edge (PE) router in the service provider's network, it necessitates source-based routing, which is not sustainable given the challenges of implementing source routing on a per-customer basis within the provider's network. This approach will not scale effectively in an MPLS environment and brings significant performance issues with it; moreover, 99% of MPLS private networks do not encrypt traffic. Another option is to move your operations to a public cloud that can guarantee you meet data assurance goals. This, too, can be prohibitively expensive, and it also lacks visibility.
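
The shortest-path-versus-engineered-path distinction can be shown in a few lines. The sketch below loosely models it with networkx on a made-up four-node topology: plain IGP forwarding picks the lowest-cost path, while a TE/SR-style explicit waypoint steers traffic onto a non-shortest path. The topology and link costs are invented for illustration.

```python
# IGP vs. traffic-engineered forwarding on a toy topology.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1), ("B", "D", 1),   # cheap path A-B-D (cost 2)
    ("A", "C", 2), ("C", "D", 2),   # pricier path A-C-D (cost 4)
])

# IGP behavior: Dijkstra always picks the lowest-cost path.
igp_path = nx.shortest_path(G, "A", "D", weight="weight")
print("IGP shortest path:", igp_path)        # ['A', 'B', 'D']

# TE/SR behavior (loosely modeled): an explicit waypoint "via C"
# overrides shortest-path selection, like a segment list would.
te_path = (nx.shortest_path(G, "A", "C", weight="weight")
           + nx.shortest_path(G, "C", "D", weight="weight")[1:])
print("Engineered path via C:", te_path)     # ['A', 'C', 'D']
```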


Spotting the Charlatans: Red Flags for Enterprise Security Teams

Sadly, by the time most people catch on that there is a charlatan on the team, grave damage has been done to both the morale and the progress of the security team. That being said, there are some clues that charlatans leave behind from time to time. If we are astute and perceptive, we can pick up on these clues and work to contain the damage that charlatans cause. ... Most talented security professionals I’ve worked with have a healthy amount of self-doubt and insecurity. This is completely normal, of course. Charlatans take advantage of this, cutting down talented professionals whom they see as a threat. This causes those targeted to recoil into a moment of thought and introspection, which is all the charlatan needs to retake the spotlight. ... One of a charlatan's strategies is to throw the person they perceive as a threat off their game. One way they do this is by taking pot shots: subtle slights, passive-aggressive insults, and unpredictable surprises aimed at their targets. If the targeted individual reacts or calls the charlatan out, the target then seems like the aggressor. The best response is to ignore the pot shots and stay focused. In many cases, once the charlatan realizes they cannot rattle you, they will slowly lose interest.


Why ICS Cybersecurity Regulations Are Essential for Industrial Resilience

As the cybersecurity landscape becomes increasingly complex, industrial companies, especially those managing industrial control systems (ICS), face heightened risks. From protecting sensitive data to safeguarding critical infrastructure, compliance with cybersecurity regulations has become essential. Here, we explore why ICS cybersecurity is crucial, the risks involved, and key steps organizations can take to meet regulatory demands without compromising operational efficiency. ... Cybersecurity risks are no longer a secondary concern but a primary focus, especially for industries managing critical infrastructure such as energy, water, and transportation. Cyber threats targeting ICS environments have become more sophisticated, posing risks not only to individual companies but also to the broader economy and society. Regulatory adherence ensures these vulnerabilities are managed systematically, reducing potential downtime, data breaches, and even physical threats. ... Cybersecurity in ICS environments isn’t merely about meeting regulatory requirements; it’s a strategic priority that protects both assets and people. By focusing on identity management, automating updates, aligning with industry standards, and bridging IT-OT security gaps, organizations can enhance resilience against emerging threats.


Quote for the day:

“Identify your problems but give your power and energy to solutions.” -- Tony Robbins