Daily Tech Digest - December 15, 2024

Navigating the Future: Cloud Migration Journeys and Data Security

To meet the requirements of DORA and future regulations, business leaders must adopt a proactive and reflexive approach to cybersecurity. Strong cyber hygiene practices must be integrated throughout the business, ensuring consistency in how data is handled, protected, and accessed. It is important to note that enhanced data security isn’t purely focused on compliance. Modern IT researchers and business analysts have been studying what differentiates the most innovative companies for decades and have identified two key principles that help businesses achieve this: Unified Control and Federated Protection. ... Advancements in data security technologies are reshaping the cloud landscape, enabling faster and more secure migrations. Privacy Enhancing Technologies (PETs) like dynamic data masking (DDM), tokenisation, and format-preserving encryption help businesses anonymise sensitive data, reducing breach risks while keeping cloud adoption fast and flexible. However, as businesses inevitably adopt multi-cloud strategies to support their processes, they will require interoperable security platforms that can seamlessly integrate across multiple cloud environments.
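
To make the idea concrete, here is a minimal sketch of two of the PETs named above, dynamic data masking and tokenisation. The field names and in-memory vault are illustrative assumptions; a production system would use a hardened tokenisation service, not a dict.

```python
import secrets

# Illustrative sketch only: masking hides most of a value for display;
# tokenisation replaces it with a random surrogate and keeps the mapping
# in a secure vault (here a plain dict, standing in for a real service).

_token_vault: dict[str, str] = {}  # hypothetical stand-in for an HSM-backed vault

def mask_pan(pan: str) -> str:
    """Dynamic data masking: show only the last four digits."""
    return "*" * (len(pan) - 4) + pan[-4:]

def tokenise(value: str) -> str:
    """Replace a sensitive value with a random token."""
    token = secrets.token_hex(8)
    _token_vault[token] = value
    return token

def detokenise(token: str) -> str:
    return _token_vault[token]

pan = "4111111111111111"
print(mask_pan(pan))            # ************1111
t = tokenise(pan)
print(t, detokenise(t) == pan)  # random token, round-trips to the original
```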


Maximizing AI Payoff in Banking Will Demand Enterprise-Level Rewiring

Beyond thinking in broad strokes of AI’s applicability in the bank, McKinsey holds that an institution has to be ready to adopt multiple kinds of AI set up to work with one another. This includes analytical AI — the types of AI that some banks have been using for years for credit and portfolio analysis, for instance — and generative AI, in the form of ChatGPT and others, as well as “agentic AI.” In general, agentic AI applies other types of AI to perform analyses and solve problems as a “virtual coworker.” It’s a developing facet of AI and, as described in the report, is meant to manage multiple AI inputs, rather than having a bank lean on one model. ... “You measure the outcomes you want to achieve and at the end of the pilot you will typically come out with a very good understanding of how to scale it,” Giovine says. Over six to 12 months after the pilot, “you can scale it over a good chunk of the domain.” And here, the consultant says, is where the bonus kicks in: Often a good deal of the work done to bring AI thinking to one domain can be re-used. This applies to both the business thinking and technology.


Synthetic data has its limits — why human-sourced data can help prevent AI model collapse

The more AI-generated content spreads online, the faster it will infiltrate datasets and, subsequently, the models themselves. And it’s happening at an accelerated rate, making it increasingly difficult for developers to filter out anything that is not pure, human-created training data. The fact is, using synthetic content in training can trigger a detrimental phenomenon known as “model collapse” or “model autophagy disorder (MAD).” Model collapse is the degenerative process in which AI systems progressively lose their grasp on the true underlying data distribution they’re meant to model. This often occurs when AI is trained recursively on content it generated, leading to a number of issues:
- Loss of nuance: Models begin to forget outlier data or less-represented information, crucial for a comprehensive understanding of any dataset.
- Reduced diversity: There is a noticeable decrease in the diversity and quality of the outputs produced by the models.
- Amplification of biases: Existing biases, particularly against marginalized groups, may be exacerbated as the model overlooks the nuanced data that could mitigate these biases.
- Generation of nonsensical outputs: Over time, models may start producing outputs that are completely unrelated or nonsensical.
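
The narrowing the article describes is easy to demonstrate with a toy experiment: fit a simple model to data, sample from the fit, refit on the samples, and repeat. The sketch below uses a Gaussian as a stand-in for a real model; it illustrates the mechanism, not the formal MAD analysis.

```python
import numpy as np

# Toy illustration of model collapse: recursively refit a Gaussian to its own
# finite samples. The fitted standard deviation follows a downward-biased
# random walk, so the tails (the "outlier data" above) are gradually lost.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=10_000)  # generation 0: "human" data

for generation in range(1, 26):
    mu, sigma = data.mean(), data.std()
    data = rng.normal(mu, sigma, size=100)  # each generation trains only on
                                            # synthetic output of the last fit
    if generation % 5 == 0:
        print(f"generation {generation}: fitted sigma = {sigma:.3f}")
```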


The Macy’s accounting disaster: CIOs, this could happen to you

It wasn’t outright fraud or theft, but only because the employee didn’t try to steal. The same lax safeguards that allowed expense dollars to be underreported could just as easily have allowed actual theft. “What will happen when someone actually has motivation to commit fraud? They could have just as easily kept the $150 million,” van Duyvendijk said. “They easily could have committed mass fraud without this company knowing. (Macy’s) people are not reviewing manual journals very carefully.” ... “It’s true that most ERPs are not designed to catch erroneous accounting,” she said. “However, there are software tools that allow CFOs and CAOs to create more robust controls around accounting processes and to ensure the expenses get booked to the correct P&L designation. Initiating, approving, recording transactions, and reconciling balances are each steps that should be handled by a separate member of the team. There are software tools that can assist with this process, such as those that enable use of AI analytics to assess actual spend and compare that spend to your reported expenses. Some such tools use AI to look for overriding journal entries that reverse expense items and move those expenses to a balance sheet account.”
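
As a rough illustration of the controls described above, the sketch below checks two things over hypothetical journal-entry records: segregation of duties, and expense reversals into balance sheet accounts. The field names and account lists are assumptions for the example, not any vendor's schema.

```python
# Sketch of two automated controls (illustrative field names):
# 1) segregation of duties: the same person must not both initiate and approve;
# 2) flag entries that move expenses off the P&L into a balance sheet account.

journal_entries = [
    {"id": 1, "initiator": "alice", "approver": "bob",
     "debit": "prepaid_assets", "credit": "delivery_expense", "amount": 151_000_000},
    {"id": 2, "initiator": "carol", "approver": "carol",
     "debit": "delivery_expense", "credit": "accrued_liabilities", "amount": 12_000},
]

BALANCE_SHEET_ACCOUNTS = {"prepaid_assets", "accrued_liabilities"}
EXPENSE_ACCOUNTS = {"delivery_expense"}

for e in journal_entries:
    if e["initiator"] == e["approver"]:
        print(f"entry {e['id']}: segregation-of-duties violation")
    if e["credit"] in EXPENSE_ACCOUNTS and e["debit"] in BALANCE_SHEET_ACCOUNTS:
        print(f"entry {e['id']}: expense reversed into a balance sheet account - review")
```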


Digital Nomads and Last-Minute Deals: How Online Data Enables Offline Adventures

Along with remote work preference, the pandemic boosted another trend. Many emerged from it more spontaneous, seeing how travel can be restricted so suddenly and for so long. Even before, millennials were ready to embrace impromptu travel, with half of them having planned last-minute vacations. For digital nomads, last-minute deals for flights and hotels are even more important as they need to adapt to changing situations quickly to strike a work-life balance on the go. This opens opportunities for websites to offer services that assist digital nomads in finding the best last-minute deals. ... Many of the first successful startups by the nomads were teaching about the nomadic lifestyle or connecting the nomads with each other. For example, some websites use APIs to aggregate data about the suitability of cities for remote work. Drawing data from various online sources in real time, such platforms can constantly provide information relevant to traveling remote workers. And the relevant information is very diverse. The aforementioned travel and hospitality prices and deals alone generate volumes of data every second. Then, there is information about security and internet stability in various locations, which requires reliable and constantly updated reviews.


It’s not what you know, it’s how you know you know it

According to the Stack Overflow Developer Survey, developers and technologists have increasingly been learning to code from online media such as blogs and videos over the last four years, rising from 60% in 2021 to 82% in 2024. The latest resource developers can utilize for learning is generative AI, which is emerging as a key tool that offers real-time problem-solving assistance, personalized coding tips, and innovative ways to enhance skill development, seamlessly integrated within daily workflows. There has been a lot of excitement in the world of software development about AI’s potential to increase the speed of learning and access to more knowledge. Speculation abounds as to whether learning will be helped or hindered by AI advancement. Our recent survey of over 700 developers and technologists reveals the process of knowing things is just that—a process. New insights about how the Stack Overflow community learns demonstrate that software professionals prefer to gain and share knowledge through hands-on interactions. Their preferences for sourcing and contributing to groups or individuals (or AI) provide color on the evolving landscape of knowledge work.


What is data science? Transforming data into value

While closely related, data analytics is a component of data science, used to understand what an organization’s data looks like. Data science takes the output of analytics to solve problems. Data scientists say that investigating something with data is simply analysis, so data science takes analysis a step further to explain and solve problems. Another difference between data analytics and data science is timescale. Data analytics describes the current state of reality, whereas data science uses that data to predict and understand the future. ... The goal of data science is to construct the means to extract business-focused insights from data, and ultimately optimize business processes or provide decision support. This requires an understanding of how value and information flows in a business, and the ability to use that understanding to identify business opportunities. While that may involve one-off projects, data science teams more typically seek to identify key data assets that can be turned into data pipelines that feed maintainable tools and solutions. Examples include credit card fraud monitoring solutions used by banks, or tools used to optimize the placement of wind turbines in wind farms.


Tech Giants Retain Top Spots, Credit Goes to Self-Disruption

Companies today know they are not infallible in the face of evolving technologies. They are willing to disrupt their tried and tested offerings to fully capitalize on innovation. This ability of "dual transformation" - sustaining as well as reinventing the core business - is a hallmark of successful incumbents. It enables companies to optimize their existing operations while investing in the future, ensuring they are not caught flat-footed when the next wave of disruption hits. And because they have capital, talent and resources, they are already ahead of newer players. ... There is also a core cultural shift to encourage innovative thinking. Amazon implemented its famous "two-pizza teams" approach, where small, autonomous groups work on focused projects with minimal bureaucracy. Launched during the dot-com boom, Amazon subsequently ventured into successful innovations, including Prime, AWS and Alexa. Google's longstanding "20% time" policy, which allows employees to dedicate a portion of their workweek to passion projects, resulted in breakthrough products including AdSense and Google News. Drawing from decades of experience, these organizations know the whole is greater than the sum of its parts.


The Power of the Collective Purse: Open-Source AI Governance and the GovAI Coalition

Collaboration and transparency often go hand in hand. One of the most significant outcomes of the GovAI Coalition’s work is the development of open-source resources that benefit not only coalition members but also vendors and uninvolved governments. By pooling resources and expertise, the coalition is creating a shared repository of guidelines, contracting language, and best practices that any government entity can adapt to its specific needs. This collaborative, open-source initiative greatly reduces the transaction costs for government agencies, particularly those that are understaffed or under-resourced. While the more expansive budgets and technological needs of larger state and local governments sometimes lead to outsized roles in Coalition standard-setting, this allows smaller local governments, which may lack the capacity to develop comprehensive AI governance frameworks independently, to draw on the Coalition’s collective institutional expertise. This crowd-sourced knowledge ensures that even the smallest agencies can implement robust AI governance policies without having to start from scratch.


Redefining software excellence: Quality, testing, and observability in the age of GenAI

Traditional test automation has long relied on rigid, code-based frameworks, which require extensive scripting to specify exactly how tests should run. GenAI upends this paradigm by enabling intent-driven testing. Instead of focusing on rigid, script-heavy frameworks, testers can define high-level intents, like “Verify user authentication,” and let the AI dynamically generate and execute corresponding tests. This approach reduces the maintenance overhead of traditional frameworks, while aligning testing efforts more closely with business goals and ensuring broader, more comprehensive test coverage. ... QA and observability are no longer siloed functions. GenAI creates a semantic feedback loop between these domains, fostering a deeper integration like never before. Robust observability ensures the quality of AI-driven tests, while intent-driven testing provides data and scenarios that enhance observability insights and predictive capabilities. Together, these disciplines form a unified approach to managing the growing complexity of modern software systems. By embracing this symbiosis, teams not only simplify workflows but raise the bar for software excellence, balancing the speed and adaptability of GenAI with the accountability and rigor needed to deliver trustworthy, high-performing applications.
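
A minimal sketch of the intent-driven pattern described above, assuming a placeholder generate_test_cases function standing in for whatever LLM client or testing platform a team actually uses:

```python
# Sketch of intent-driven testing: a high-level intent is expanded into
# concrete test cases by a generative model, then executed.

def generate_test_cases(intent: str) -> list[dict]:
    # Placeholder: in practice this would prompt an LLM with the intent
    # and ask it to enumerate concrete steps and expected results.
    return [
        {"steps": ["open login page", "submit valid credentials"],
         "expect": "dashboard shown"},
        {"steps": ["open login page", "submit wrong password"],
         "expect": "error shown, no session created"},
    ]

def run(case: dict) -> bool:
    ...  # drive the app under test (Selenium, Playwright, API calls, etc.)
    return True

for case in generate_test_cases("Verify user authentication"):
    assert run(case), case
```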



Quote for the day:

"Success is not the key to happiness. Happiness is the key to success. If you love what you are doing, you will be successful." -- Albert Schweitzer

Daily Tech Digest - December 14, 2024

How Conscious Unbossing Is Reshaping Leadership And Career Growth

Conscious unbossing presents both challenges and opportunities for organizations. On the one hand, fewer employees pursuing traditional leadership tracks can create gaps in decision-making, team development, and operational consistency. On the other hand, organizations that embrace unbossing as a cultural strategy can thrive. Novartis is a prime example, fostering a culture of curiosity and empowerment that drives both engagement and innovation. By breaking down rigid hierarchies, they’ve shown how unbossed leadership can be a strategic advantage rather than a liability. ... Conscious unbossing is transforming how we think about leadership and career progression. Organizations that adapt by redefining leadership roles, offering flexible career pathways, and building cultures rooted in curiosity and empathy will thrive. Companies like Novartis, Patagonia, and Microsoft have proven that unbossed leadership isn’t a limitation—it’s an opportunity to innovate and grow. By embracing this shift, businesses can create resilient, dynamic teams and ensure leadership continuity. However, this approach also comes with challenges that organizations must navigate to ensure its success. One potential downside is the risk of role ambiguity. 


Why agentic AI and AGI are on the agenda for 2025

We’re ready to move beyond basic now, and what we’re seeing is an evolution towards a digital co-worker – an agent. Agents are really those digital coworkers, our friends, that are going to help us to do research, write a text, and then publish it somewhere. So you set the goal – let’s say, run research on some telco and networking predictions for next year – and an agent would do the research and run it by you, and then push it to where it needs to go to get reviewed, edited, and more. You would provide it with an outcome, and it will choose the best path to get to that outcome. Right now, chatbots are really an enhanced search engine with creative flair. But agentic AI is the next stage of evolution, and will be used across enterprises as early as next year. This will require increased network bandwidth and deterministic connectivity, with compute closer to users – but these essentials are already being rolled out as we speak, ensuring agentic AI is firmly on the agenda for enterprises in the new year. ... Amid the AI rush, we’ve been focused on the outcomes rather than the practicalities of how we’re accessing and storing the data being generated. But concerns are emerging. Where does the data go? Does it disappear in a big cloud? Concerns are obviously being raised in many sectors, particularly in the medical space, where medical records cannot leave state or national borders.
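
The outcome-driven loop described here can be sketched in a few lines: a planner (in practice an LLM) repeatedly picks the next tool given the goal and what has happened so far. The tools and planner below are placeholders, not any vendor's API.

```python
# Minimal outcome-driven agent loop (a sketch; the planner and tools are
# hypothetical stand-ins for an LLM call and real integrations).

TOOLS = {
    "search":  lambda q: f"results for {q!r}",
    "draft":   lambda notes: f"draft based on {notes!r}",
    "publish": lambda doc: f"submitted {doc!r} for review",
}

def plan_next_step(goal: str, history: list[str]) -> str:
    # Placeholder for an LLM call that picks the next tool from goal + history.
    order = ["search", "draft", "publish"]
    return order[len(history)] if len(history) < len(order) else "done"

goal = "research telco and networking predictions for next year"
history: list[str] = []
while (step := plan_next_step(goal, history)) != "done":
    result = TOOLS[step](history[-1] if history else goal)
    history.append(result)
    print(step, "->", result)
```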


Robust Error Detection to Enable Commercial Ready Quantum Computers from Quantum Circuits

Quantum Circuits has the goal of first making components that are correct and then scaling the systems. This is part of the larger goal of making commercial-ready quantum computers. What is meant by commercial-ready quantum computers? It means you can bet your business or company on the results of a quantum computer, just as we rely today on servers and computers that provide services via cloud computing systems. Being able to trust and rely on quantum computers means systems that are repeatable, predictable and trusted. They have built an 8-qubit system and enterprise customers have been using it. Customers have said that using error mitigation and error detection can enable them to get far more utility from Quantum Circuits than from competing quantum computers. Error suppression and error mitigation are common techniques, and they are the focus of intensive efforts by most quantum computer companies and the entire quantum computing community. Quantum Circuits’ error-detecting dual-rail qubit innovation allows errors to be detected and corrected first, to avoid disrupting performance at scale. This system will enable a 10x reduction in resource requirements for scalable error correction.
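
For intuition, here is a toy simulation of the dual-rail idea, under the simplifying assumption that photon loss is the dominant error: a logical bit lives across two modes, and loss pushes the state outside the code space, where a parity check flags it as a detectable erasure rather than a silent bit flip. This illustrates the concept only; it is not Quantum Circuits' implementation.

```python
import random

# Dual-rail sketch: logical 0 = (1, 0), logical 1 = (0, 1) across two modes.
# Photon loss maps either state to (0, 0), which is outside the code space,
# so a simple parity check detects it as an "erasure" instead of corrupting
# the result silently.

def transmit(logical_bit: int, loss_prob: float = 0.05) -> tuple[int, int]:
    mode = (1, 0) if logical_bit == 0 else (0, 1)
    if random.random() < loss_prob:
        mode = (0, 0)  # photon lost
    return mode

def decode(mode: tuple[int, int]):
    if sum(mode) != 1:
        return "erasure"  # detected error: discard/retry instead of a wrong answer
    return 0 if mode == (1, 0) else 1

trials = [decode(transmit(1)) for _ in range(10_000)]
print("erasure rate:", trials.count("erasure") / len(trials))
```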


5 reasons why Google's Trillium could transform AI and cloud computing - and 2 obstacles

Trillium is designed to deliver exceptional performance and cost savings, featuring advanced hardware technologies that set it apart from earlier TPU generations and competitors. Key innovations include doubled High Bandwidth Memory (HBM), which improves data transfer rates and reduces bottlenecks. Additionally, as part of its TPU system architecture, it incorporates a third-generation SparseCore that enhances computational efficiency by directing resources to the most important data paths. There is also a remarkable 4.7x increase in peak compute performance per chip, significantly boosting processing power. These advancements enable Trillium to tackle demanding AI tasks, providing a strong foundation for future developments and applications in AI. ... Trillium is not just a powerful TPU; it is part of a broader strategy that includes Gemini 2.0, an advanced AI model designed for the "agentic era," and Deep Research, a tool to streamline the management of complex machine learning queries. This ecosystem approach ensures that Trillium remains relevant and can support the next generation of AI innovations. By aligning Trillium with these advanced tools and models, Google is future-proofing its AI infrastructure, making it adaptable to emerging trends and technologies in the AI landscape.


How Industries Are Using AI Agents To Turn Data Into Decisions

In the past, this required hours of manual work to standardize the various file formats — such as converting PDFs to spreadsheets — and reconcile inconsistencies like differing terminologies for revenue or varying date formats. Today, AI agents automate these tasks with human supervision, adapting to schema changes dynamically and normalizing data as it comes in. ... While extracting insights is vital, the ultimate goal of any data workflow is to drive action. Historically, this has been the weakest link in the chain. Insights often remain in dashboards or reports, waiting for human intervention to trigger action. By the time decisions are made, the window of opportunity may already have closed. AI agents, with humans in the loop, are expediting the entire cycle by bridging the gap between analysis and execution. ... The advent of AI agents signals a new era in data management — one where workflows are no longer constrained by team bandwidth or static processes. By automating ETL, enabling real-time analysis and driving autonomous actions, these agents, with the right guardrails and human supervision, are creating dynamic systems that adapt, learn and improve over time.
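
A minimal sketch of the normalization step described above, with an assumed synonym table and date formats; real agents would learn or adapt these mappings rather than hard-code them:

```python
from datetime import datetime

# Map synonymous field names onto one schema and parse the date formats
# that actually show up in incoming files (both tables are illustrative).

FIELD_SYNONYMS = {"revenue": "revenue", "turnover": "revenue", "sales": "revenue"}
DATE_FORMATS = ("%Y-%m-%d", "%d/%m/%Y", "%b %d, %Y")

def parse_date(raw: str) -> datetime:
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {raw!r}")

def normalize(record: dict) -> dict:
    out = {}
    for key, value in record.items():
        key = key.strip().lower()
        out[FIELD_SYNONYMS.get(key, key)] = value  # unify field terminology
    out["date"] = parse_date(out["date"])          # unify date representations
    return out

print(normalize({"Turnover": 1_200_000, "date": "Dec 15, 2024"}))
```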


The Power of Stepping Back: How Rest Fuels Leadership and Growth

It's essential to fully step back from work sometimes, especially when balancing the demands of running a business and being a parent. I find that I'm most energised and focused in the mornings, so I like to use that time to read, take notes, and reflect on different aspects of the business - whether it's strategy, growth, or new ideas. It's my creative time to think deeply and plan ahead. ... It's also important to carve out weekend days when I can fully switch off. This time away from the business helps me come back refreshed and with a clearer perspective. Even though I aim to disconnect, Lee (my husband and co-founder) and I often find ourselves discussing business because it's something we're both passionate about - strangely enough, those conversations don't feel like work. ... Stepping back from the day-to-day grind gave me the mental space to realise that while small tests have their place, they can sometimes limit your potential by encouraging cautious, safe moves. By contrast, thinking bigger and aiming for more ambitious goals has opened up a new level of creativity and opportunity. This shift in mindset has been a game-changer for us - it's unlocked several key growth areas, including new product opportunities and ways to engage with customers. 


Navigating the Future of Big Data for Business Success

Big data is no longer just a tool for competitive advantage – it has become the backbone of innovation and operational efficiency across key industries, driving billion-dollar transformations. ... The combination of artificial intelligence and big data, especially through machine learning (ML), is pushing the boundaries of what’s possible in data analysis. These technologies automate complex decision-making processes and uncover patterns that humans might miss. Google’s DeepMind AI, for instance, made a breakthrough in medical research by using data to predict protein folding, which is already speeding up drug discovery. ... Tech giants like Google and Facebook are increasing their data science teams by 20% annually, underscoring the essential role these experts play in unlocking actionable insights from vast datasets. This growing demand reflects the importance of data-driven decision-making across industries. ... AI and machine learning will also continue to revolutionize big data, playing a critical role in data-driven decision-making across industries. By 2025, AI is expected to generate $3.9 trillion in business value, with organizations leveraging these technologies to automate complex processes and extract valuable insights. 


Five Steps for Creating Responsible, Reliable, and Trustworthy AI

Model testing with human oversight is critically important. It allows data scientists to ensure the models they’ve built function as intended and root out any possible errors, anomalies, or biases. However, organizations should not rely solely on the acumen of their data scientists. Enlisting the input of business leaders who are close to the customers can help ensure that the models appropriately address customers’ needs. Being involved in the testing process also gives them a unique perspective that will allow them to explain the process to customers and alleviate their concerns. ... Be transparent: Many organizations do not trust information from an opaque “black box.” They want to know how a model is trained and the methods it uses to craft its responses. Secrecy as to the model development and data computation processes will only serve to engender further skepticism in the model’s output. ... Continuous improvement might be the final step in creating trusted AI, but it’s just part of an ongoing process. Organizations must continue to capture, cultivate, and feed data into the model to keep it relevant. They must also consider customer feedback and recommendations on ways to improve their models. These steps form an essential foundation for trustworthy AI, but they’re not the only practices organizations should follow.


With 'TPUXtract,' Attackers Can Steal Orgs' AI Models

The NCSU researchers used a Riscure EM probe station with a motorized XYZ table to scan the chip's surface, and a high sensitivity electromagnetic probe for capturing its weak radio signals. A Picoscope 6000E oscilloscope recorded the traces, Riscure's icWaves field-programmable gate array (FPGA) device aligned them in real-time, and the icWaves transceiver used bandpass filters and AM/FM demodulation to translate and filter out irrelevant signals. As tricky and costly as it may be for an individual hacker, Kurian says, "It can be a competing company who wants to do this, [and they could] in a matter of a few days. For example, a competitor wants to develop [a copy of] ChatGPT without doing all of the work. This is something that they can do to save a lot of money." Intellectual property theft, though, is just one potential reason anyone might want to steal an AI model. Malicious adversaries might also benefit from observing the knobs and dials controlling a popular AI model, so they can probe them for cybersecurity vulnerabilities. And for the especially ambitious, the researchers also cited four studies that focused on stealing regular neural network parameters. 
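
To give a flavor of the signal-conditioning step, the sketch below bandpass-filters a synthetic trace with SciPy; the sample rate and band edges are arbitrary stand-ins, not the values used in the TPUXtract work.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Isolate a band of interest in a captured trace and strip out-of-band noise
# (frequencies here are hypothetical placeholders).

fs = 1_000_000.0               # sample rate of the captured trace (Hz)
lo, hi = 50_000.0, 200_000.0   # assumed band where the leakage signal lives

t = np.arange(0, 0.01, 1 / fs)
trace = (np.sin(2 * np.pi * 120_000 * t)            # in-band "leakage"
         + 0.5 * np.random.default_rng(0).normal(size=t.size))  # broadband noise

b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
filtered = filtfilt(b, a, trace)
print(trace.std(), filtered.std())  # out-of-band noise is attenuated
```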


Artificial Intelligence Looms Large at Black Hat Europe

From a business standpoint, advances in AI are going to "make those predictions faster and faster, cheaper and cheaper," he said. Accordingly, "if I was in the business of security, I would try to make all of my problems prediction problems," so they could get solved by using prediction engines. What exactly these prediction problems might be remains an open question, although Zanero said other good use cases include analyzing code, and extracting information from unstructured text - for example, analyzing logs for cyberthreat intelligence purposes. "So it accelerates your investigation, but you still have to verify it," Moss said. "The verify part escapes most students," Zanero said. "I say that from experience." One verification challenge is AI often functions like a very complex, black box API, and people have to adapt their prompt to get the proper output, he said. The problem: that approach only works well when you know what the right answer should be, and can thus validate what the machine learning model is doing. "The real problematic areas in all machine learning - not just using LLMs - is what happens if you do not know the answer, and you try to get the model to give you knowledge that you didn't have before," Zanero said. "That's a deep area of research work."



Quote for the day:

"The only person you are destined to become is the person you decide to be." -- Ralph Waldo Emerson

Daily Tech Digest - December 13, 2024

The fintech revolution: How digital disruption is reshaping the future of banking

Several pivotal trends have converged to accelerate fintech adoption. The JAM trinity—Jan Dhan, Aadhaar, and Mobile—became the cornerstone of India’s fintech revolution, enabling seamless, paperless onboarding and verification for financial services. Aadhaar-enabled biometric authentication, for instance, has transformed how identity verification is conducted, making the process entirely mobile-based. Perhaps the Unified Payments Interface (UPI) is the most profound disruptor. Introduced by the Indian government as part of its push for a cashless economy, UPI has redefined peer-to-peer (P2P) and person-to-merchant (P2M) transactions. As of September 2024, UPI transactions have reached a staggering 15 billion per month, with transaction values surpassing INR 20.6 trillion, marking a 16x increase in volume and a 13x increase in value over five years. UPI’s convenience and speed have made it the default payment mode for millions, further marginalising the role of traditional banking infrastructure. At the same time, blockchain technology is emerging as a force that could dramatically reduce bank operational costs. Decentralised, secure, and transparent, blockchain allows financial institutions to overhaul their legacy systems. 


Bridging the AI Skills Gap: Top Strategies for IT Teams in 2025

Daly explained that practical applications are key to learning, and creating cross-functional teams that include AI experts can facilitate knowledge sharing and the practical application of new skills. "To prepare for 2025 and beyond, it's crucial to integrate AI and ML into the core business strategy beyond R&D investment or technical roles, but also into broader organizational talent development," she said. "This ensures all employees understand the opportunity [and] potential impact, and are trained on responsible use." ... Kayne McGladrey, IEEE senior member and field CISO at Hyperproof, said AI ethics skills are important because they ensure that AI systems are developed and used responsibly, aligning with ethical standards and societal values. "These skills help in identifying and mitigating biases, ensuring transparency, and maintaining accountability in AI operations," he explained. ... Scott Wheeler, cloud practice lead at Asperitas, said building a culture of innovation and continual learning is the first step in closing a skills gap, particularly for newer technologies like AI. "Provide access to learning resources, such as on-demand platforms like Coursera, Udemy, Wizlabs," he suggested. "Embed learning into IT projects by allocating time in the project schedule and monitor and adjust the various programs based on what works or doesn't work for your organization."


What Makes the Ideal Platform Engineer?

Platform engineers decide on a platform — consisting of many different tools, workflows and capabilities — that DevOps, developers and others in the business can use to develop and monitor the development of software. They base these decisions on what will work best for these users. ... The old adage that every business is unique applies here; platform engineering doesn’t look the same in every organization, nor do the platforms or portals that are used. But there are some key responsibilities that platform engineers will often have and skills that they require. Noam Brendel is a DevOps team lead at Checkmarx, an application security firm that has embraced platform engineering. He believes a platform engineer’s focus should be on improving developer excellence. “The perfect platform engineer helps developers by building systems that eliminate bottlenecks and increase collaboration,” he said. ... “Platform engineers need to have a strong understanding of how everything is connected and how the platform is built behind the scenes,” explained Zohar Einy, CEO of Port, a provider of open internal developer portals. He emphasized the importance of knowing how the company’s technical stack is structured and which development tools are used.


Biometrics and AI Knock Out Passwords in the Security Battle

Biometrics and AI-powered authentication have moved beyond concept to successful application. For instance, HSBC's Voice ID voice identification technology analyzes over 100 characteristics of an individual's voice, maintains a sample of the customer's voice, and compares it to the caller's voice. ... The success of implementing biometrics and AI into existing systems relies on organizations to follow best practices. Organizational leaders can assess organizational needs by conducting a security audit to identify vulnerabilities that biometrics and AI can address. This information is then used to create a roadmap for implementation considering budget, resources, and timelines. Involving appropriate staff in such discussions is essential so all stakeholders understand the factors considered in decision-making. Selecting the right technology calls for careful vendor evaluation and identification of solutions that align with the organization's requirements and compliance obligations. Once these decisions are solidified, it is prudent to use pilot programs to start the integration. Small-scale deployments test effectiveness and address any unforeseen issues before large-scale implementation.


CISA, Five Eyes issue hardening guidance for communications infrastructure

The joint guidance is in direct response to the breach of telecommunications infrastructure carried out by the Chinese government-linked hacking collective known as Salt Typhoon. ... “Although tailored to network defenders and engineers of communications infrastructure, this guide may also apply to organizations with on-premises enterprise equipment,” the guidance states. “The authoring agencies encourage telecommunications and other critical infrastructure organizations to apply the best practices in this guide.” “As of this release date,” the guidance says, “identified exploitations or compromises associated with these threat actors’ activity align with existing weaknesses associated with victim infrastructure; no novel activity has been observed. Patching vulnerable devices and services, as well as generally securing environments, will reduce opportunities for intrusion and mitigate the actors’ activity.” Visibility, a cornerstone of network defense for monitoring, detecting, and understanding activity within infrastructure, is pivotal in identifying potential threats, vulnerabilities, and anomalous behaviors before they escalate into significant security incidents.


Tackling software vulnerabilities with smarter developer strategies

No two developers solve a problem or build a software product the same way. Some arrive at their career through formal college education, while others are self-taught and with minimal mentorship. Styles and experiences vary wildly. Equally so, we should expect they will consider secure coding practices and guidelines with similar diversity of thought. Organizations must account for this wide diversity in their secure development practices – training, guidelines, standards. These may be foreign concepts to even a highly proficient developer, and we need to give our developers the time and space to learn and ask questions, with sufficient time to develop a secure coding proficiency. ... Best-in-class organizations have established ‘security champions’ programs where high-skilled developers are empowered to be a team-level resource for secure coding knowledge and best practice in order for institutional knowledge to spread. This is particularly important in remote environments where security teams may be unfamiliar or untrusted faces, and the internal development team leaders are all that much more important to set the tone and direction for adopting a security mindset and applying security principles.


Developing an AI platform for enhanced manufacturing efficiency

To power our AI Platform, we opted for a hybrid architecture that combines our on-premises infrastructure and cloud computing. The first objective was to promote agile development. The hybrid cloud environment, coupled with a microservices-based architecture and agile development methodologies, allowed us to rapidly iterate and deploy new features while maintaining robust security. The choice of a microservices architecture arose from the need to flexibly respond to changes in services and libraries, and as part of this shift, our team also adopted a development method called "SCRUM" where we release features incrementally in short cycles of a few weeks, ultimately resulting in streamlined workflows. ... The second objective is to use resources effectively. The manufacturing floor, where AI models are created, is now also facing strict cost efficiency requirements. With a hybrid cloud approach, we can use on-premises resources during normal operations and scale to the cloud during peak demand, thus reducing GPU usage costs and optimizing performance. This also allows us to flexibly adapt to an expected increase in the number of AI Platform users in the future.


Privacy is a human right, and blockchain is critical to securing it

While blockchain offers decentralized and secure transactions, the lack of privacy on public blockchains can expose users to risks, from theft to persecution. In October, details emerged of one of the largest in-person crypto thefts in US history after a DC man was targeted when kidnappers were able to identify him as an early crypto investor. However, despite the case for on-chain privacy, it’s proven difficult to advance any real-world implementations. Along with the regulatory challenges faced by segments such as privacy coins and mixers, certain high-profile missteps have done little to advance the case for on-chain privacy. Worldcoin, Sam Altman’s much-touted crypto identity project that collected biometric data from users, has also failed to live up to expectations due to, perversely, concerns from regulators about breaches of users’ data privacy. In August, the government of Kenya suspended Worldcoin’s operations following concerns about data security and consent practices. In October, the company announced it was pivoting away from the EU and towards Asian and Latin American markets, following regulatory wrangling over the European GDPR rules.


Transforming fragmented legacy controls at large banks

You’re not just talking about replacing certain components of a process with technology. There’s also a cost to this change. It’s not always at the top of the list when budgets come around. Usually, spend goes on areas that are revenue generating or more in the innovation space. It can be somewhat of a hard sell to the higher-ups as to why they would spend money to change something, and a lot of organisations aren’t great at articulating the business case for it. ... If you take the operational resilience perspective, for example, that’s about being able to get your arms around your important business services, to use regulatory language. Consider what is supporting them: what does it take to maintain them, keep them resilient and available, and recover them? The reality is that this used to be infinitely more straightforward. Most of the systems may have been in your own data centre in your own building. Now, the ecosystems that support most of these services are much more complex. You’ve obviously got cloud providers, SaaS providers, and third parties that you’ve outsourced to. You’ve also got a huge number of different services where, even if you’ve bought them and they’re in-house, there is a myriad of internal teams to navigate.


Why the Growing Adoption of IoT Demands Seamless Integration of IT and OT

Effective cybersecurity in OT environments requires a mix of skills and knowledge from both IT and OT teams. This includes professionals from IT infrastructure and cybersecurity, as well as control system engineers, field operations staff, and asset managers typically found in OT. ... The integration of IT and OT through advanced IoT protocols represents a major step forward in securing industrial and healthcare systems. However, this integration introduces significant challenges. I propose a new approach to IoT security that incorporates protocol-agnostic application layer security, lightweight cryptographic algorithms, dynamic key management, and end-to-end encryption, all based on zero-trust network architecture (ZTNA). ... In OT environments, remediation steps must go beyond traditional IT responses. While many IT security measures reset communication links and wipe volatile memory to prevent further compromise, additional processes are needed for identifying, classifying, and investigating cyber threats in OT systems. Furthermore, organizations can benefit from creating unified governance structures and cross-training programs that align the priorities of IT and OT teams. 
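
As a sketch of the lightweight end-to-end encryption piece, the example below seals a sensor reading with ChaCha20-Poly1305 using the Python cryptography library; key distribution and rotation (the dynamic key management the author proposes) are assumed to be handled by a separate service.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

# Authenticated end-to-end encryption of one sensor reading (illustrative;
# the routing header and payload format are assumptions for the example).

key = ChaCha20Poly1305.generate_key()   # 256-bit key, provisioned out of band
aead = ChaCha20Poly1305(key)
nonce = os.urandom(12)                  # must never repeat for the same key
reading = b'{"sensor":"pump-7","psi":84.2}'
header = b"plant-3/line-2"              # authenticated but not encrypted

ciphertext = aead.encrypt(nonce, reading, header)
# Decryption fails loudly if the ciphertext or header was tampered with.
assert aead.decrypt(nonce, ciphertext, header) == reading
```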



Quote for the day:

"There are three secrets to managing. The first secret is have patience. The second is be patient. And the third most important secret is patience." -- Chuck Tanner

Daily Tech Digest - December 12, 2024

The future of AI regulation is up in the air: What’s your next move?

The problem, Jones says, is that the lack of regulations boils down to a lack of accountability when it comes to what your large language models are doing — and that includes hoovering up intellectual property. Without regulations and legal ramifications, resolving issues of IP theft will either boil down to court cases, or more likely, especially in cases where the LLM belongs to a company with deep pockets, the responsibility will slide downhill to the end users. And when profitability outweighs the risk of a financial hit, some companies are going to push the boundaries. “I think it’s fair to say that the courts aren’t enough, and the fact is that people are going to have to poison their public content to avoid losing their IP,” Jones says. “And it’s sad that it’s going to have to get there, but it’s absolutely going to have to get there if the risk is, you put it on the internet, suddenly somebody’s just ripped off your entire catalog and they’re off selling it directly as well.” ... “These massive weapons of mass destruction, from an AI perspective, they’re phenomenally powerful things. There should be accountability for the control of them,” Jones says. “What it will take to put that accountability onto the companies that create the products, I believe firmly that that’s only going to happen if there’s an impetus for it.”


Leading VPN CCO says digital privacy is "a game of chess we need to play"

Sthanu calls VPNs a first step, and puts forward secure browsers as a second. IPVanish recently launched a secure browser, which is an industry first, and something not offered by other top VPN providers. "It keeps your browser private, blocking tracking, encrypting the sessions, but also protecting your device from any malware," Sthanu said. IPVanish's secure browser utilises the cloud. Session tracking, cookies, and targeting are all eliminated, as web browsing operates in a cloud sandbox. ... Encrypting your data is a vital part of what VPNs do. AES 256-bit and ChaCha20 encryption are currently the standards for the most secure VPNs, which do an excellent job at encrypting and protecting your data. These encryption ciphers can protect you against the vast majority of cyber threats out there right now – but as computers and threats develop, security will need to develop too. Quantum computers are the next stage in computing evolution, and there will come a time, predicted to be within the next five years, when these computers can break 256-bit encryption – this is being referred to as "Q-day." Quantum computers are not readily available at this moment in time, with most found in universities or research labs, but they will become more widespread.
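
For readers curious what "AES 256-bit encryption" looks like in code, here is a minimal example using the Python cryptography library; a real VPN derives its keys during the connection handshake rather than generating them ad hoc like this.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative AES-256-GCM round trip on a single "packet".
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                 # unique per message
packet = b"GET /private HTTP/1.1"
sealed = aesgcm.encrypt(nonce, packet, None)
assert aesgcm.decrypt(nonce, sealed, None) == packet
```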


Kintsugi Leaders: Conservers of talent who convert the weak into winners

Profligate Leaders are not only gluttonous in their appetite for consuming resources, they are usually also choosy about the kind they will order. Not for them the tedious effort of using their own best-selling, training cookbook for seasoning and stirring the youth coming out from the country’s stretched and creaking educational system. On the contrary, they push their HR to queue up for ready-cooked candidates outside the portals of elite institutes on day zero. ... Kintsugi Leaders can create nobility of a different kind if they follow three precepts. The central one is the willingness to bet big and take risks on untried talent. In one of my Group HR roles, eyebrows were raised when I placed the HR leadership of large businesses in the hands of young, internally groomed talent instead of picking stars from the market. ... There is a third (albeit rare) kind of HR leader: the Trusted Transformer who can convert a Profligate Leader into the Kintsugi kind. Revealable corporate examples are thin on the ground. In keeping with the Kintsugi theme, then, I have to fall back on Japan. Itō Hirobumi had a profound influence on Emperor Meiji and played a pivotal role in shaping the political landscape of Meiji-era Japan.


4 North Star Metrics for Platform Engineering Teams

“Acknowledging that DORA, SPACE and DevEx provide different slivers or different perspectives into the problem, our goal was to create a framework that encapsulates all the frameworks,” Noda said, “like one framework to rule them all, that is prescriptive and encapsulates all the existing knowledge and research we have.” DORA metrics don’t mean much at the team level, but, he continued, developer satisfaction — a key measurement of platform engineering success — doesn’t matter to a CFO. “There’s a very intentional goal of making especially the key metrics, but really all the metrics, meaningful to all stakeholders, including managers,” Noda said. “That enables the organization to create a single, shared and aligned definition of productivity so everyone can row in the same direction.” The Core 4 key metrics are:
- An average of diffs per engineer is used to measure speed.
- The Developer Experience Index, or homegrown developer experience surveys, is used to measure effectiveness.
- A change failure rate is used to measure quality.
- The percentage of time spent on new capabilities is used to measure impact.
DX’s own DXI, which uses a standardized set of 14 Likert-scale questions — from strongly agree to strongly disagree — is currently only available to DX users.
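
As a sketch of how two of these metrics roll up from raw delivery data (toy records; real rollups come from the version control and deployment systems):

```python
# Compute two of the Core 4 from illustrative delivery records.
engineers = {"ann", "ben", "cho"}
diffs = [{"author": "ann"}, {"author": "ann"}, {"author": "ben"}, {"author": "cho"}]
deployments = [{"failed": False}, {"failed": True}, {"failed": False}, {"failed": False}]

diffs_per_engineer = len(diffs) / len(engineers)                                 # speed
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)  # quality

print(f"diffs per engineer: {diffs_per_engineer:.2f}")
print(f"change failure rate: {change_failure_rate:.0%}")
```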


The future of data: A 5-pillar approach to modern data management

To succeed in today’s landscape, every company — small, mid-sized or large — must embrace a data-centric mindset. This article proposes a methodology for organizations to implement a modern data management function that can be tailored to meet their unique needs. By “modern”, I refer to an engineering-driven methodology that fully capitalizes on automation and software engineering best practices. This approach is repeatable, minimizes dependence on manual controls, harnesses technology and AI for data management and integrates seamlessly into the digital product development process. ... Unlike the technology-focused Data Platform pillar, Data Engineering concentrates on building distributed parallel data pipelines with embedded business rules. It is crucial to remember that business needs should drive the pipeline configuration, not the other way around. For example, if preserving the order of events is essential for business needs, the appropriate batch, micro-batch or streaming configuration must be implemented to meet these requirements. Another key area involves managing the operational health of data pipelines, with an even greater emphasis on monitoring the quality of the data flowing through the pipeline. 
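
To illustrate the event-ordering point, the sketch below routes events for the same business key to the same partition, the way a streaming configuration preserves per-key order. The partition count and Python's built-in hash are illustrative simplifications; real systems use a fixed partition scheme and a deterministic hash.

```python
from collections import defaultdict

# Route events by business key so each key's events stay in order
# within one partition, even when partitions are processed in parallel.

NUM_PARTITIONS = 4
partitions: dict[int, list[dict]] = defaultdict(list)

events = [
    {"order_id": "A17", "status": "created"},
    {"order_id": "B02", "status": "created"},
    {"order_id": "A17", "status": "shipped"},  # must be seen after "created"
]

for event in events:
    p = hash(event["order_id"]) % NUM_PARTITIONS  # stable key -> partition mapping
    partitions[p].append(event)

for p, evs in sorted(partitions.items()):
    print(p, [e["status"] for e in evs])  # per-key order preserved per partition
```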


How and Why the Developer-First Approach Is Changing the Observability Landscape

First and foremost, developers aim to avoid issues altogether. They seek modern observability solutions that can prevent problems before they occur. This goes beyond merely monitoring metrics: it encompasses the entire software development lifecycle (SDLC) and every stage of development within the organization. Production issues don't begin with a sudden surge in traffic; they originate much earlier when developers first implement their solutions. Issues begin to surface as these solutions are deployed to production and customers start using them. Observability solutions must shift to monitoring all the aspects of SDLC and all the activities that happen throughout the development pipeline. This includes the production code and how it’s running, but also the CI/CD pipeline, development activities, and every single test executed against the database. Second, developers deal with hundreds of applications each day. They can’t waste their time manually tuning alerting for each application separately. The monitoring solutions must automatically detect anomalies, fix issues before they happen, and tune the alarms based on the real traffic. They shouldn’t raise alarms based on hard limits like 80% of the CPU load.
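
A minimal sketch of alerting on a service's own baseline rather than a hard limit like 80% CPU; the window size and cutoff are illustrative choices, and production systems use far richer anomaly models.

```python
import statistics

# Alert when the latest sample deviates strongly from recent behavior,
# instead of tripping a fixed threshold that fits no service exactly.

def should_alert(samples: list[float], window: int = 60, cutoff: float = 4.0) -> bool:
    history, latest = samples[-window - 1:-1], samples[-1]
    if len(history) < 2:
        return False
    mu = statistics.fmean(history)
    sigma = statistics.stdev(history) or 1e-9  # guard a perfectly flat baseline
    return abs(latest - mu) / sigma > cutoff

cpu = [31, 30, 33, 29, 32, 31, 30, 88]  # spike relative to this service's own norm
print(should_alert(cpu))  # True: anomalous for this workload, no hard limit needed
```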


We must adjust expectations for the CISO role

The sense of vulnerability CISOs feel today is compounded by a shifting accountability model in the boardroom. As cybersecurity incidents make front-page news more frequently, boards and executive teams are paying closer attention. This increased scrutiny is a double-edged sword: on the one hand, it can mean greater support and resources; on the other, it often translates to CISOs being in the proverbial hot seat. What’s more, cybersecurity is still a rapidly evolving field with few long-standing best practices. It’s a space marked by constant adaptation, bringing a certain degree of trial and error. When an error occurs—especially one that leads to a breach—the CISO’s role is scrutinized. While the entire organization might have a role in cybersecurity, CISOs are often expected to bear the brunt of accountability. This dynamic is unsettling for many in the position, and the 99% of CISOs who fear for their job security in the event of a breach clearly illustrates this point. So, what can be done? Both organizations and CISOs are responsible for recalibrating expectations and addressing the root causes of these pervasive job security fears. For organizations, a starting point is to shift cybersecurity from a reactive to a proactive stance. Investing in continuous improvement—whether through advanced security technologies, employee training, or cyber insurance—is crucial.


Bug bounty programs can deliver significant benefits, but only if you’re ready

The most significant benefit of a bug bounty program is finding vulnerabilities an organization might not have otherwise discovered. “A bug bounty program gives you another avenue of identifying vulnerabilities that you’re not finding through other processes,” such as internal vulnerability scans, Stefanie Bartak, associate director of the vulnerability management team at NCC Group, tells CSO. Establishing a bug bounty program signals to the broader security research community that an organization is serious about fixing bugs. “For an enterprise, it’s a really good way for researchers, or anyone, to be able to contact them and report something that may not be right in their security,” Louis Nyffenegger, CEO of PentesterLab, tells CSO. Moreover, a bug bounty program will offer an organization a wider array of talent to bring perspectives that in-house personnel don’t have. “You get access to a large community of diverse thinkers, which help you find vulnerabilities you may otherwise not get good access to,” Synack’s Lance says. “That diversity of thought can’t be underestimated. Diversity of thought and diversity of researchers is a big benefit. You get a more hardened environment because you get better or additional testing in some cases.”


Harnessing SaaS to elevate your digital transformation journey

The impact of AI-driven SaaS solutions can be seen across multiple industries. In retail, AI-powered SaaS platforms enable businesses to analyze consumer behavior in real-time, providing personalized recommendations that drive sales. In manufacturing, AI optimizes supply chain management, reducing waste and increasing productivity. In the finance sector, AI-driven SaaS automates risk assessment, improving decision-making and reducing operational costs. ... As businesses continue to adopt SaaS and AI-driven solutions, the future of digital transformation looks promising. Companies are no longer just thinking about automating processes or improving efficiency, they are investing in technologies that will help them shape the future of their industries. From developing the next generation of products to understanding their customers better, SaaS and AI are at the heart of this evolution. CTOs, like myself, are now not only responsible for technological innovation but are also seen as key contributors to shaping the company’s overall business strategy. This shift in leadership focus will be critical in helping organizations navigate the challenges and opportunities of digital transformation. By leveraging AI and SaaS, we can build scalable, efficient, and innovative systems that will drive growth for years to come. 


What makes product teams effective?

More enterprises are adopting a cross-functional team model, yet many still tend to underinvest in product management. While they make sure to fill the product owner role—a person accountable for translating business needs into technology requirements—they do not always choose the right individual for the product manager role. Effective product managers are business leaders with the mindset and technical skills to guide multiple product teams simultaneously. They shape product strategy, define requirements, and uphold the bar on delivery quality, usually partnering with an engineering or technology lead in a two-in-a-box model. ... Unsurprisingly, when organizations recognize individual expertise, provide options for career progression, and base promotions on capabilities, employees are more engaged and satisfied with their teams. Similarly, by standardizing and reducing the overall number of roles, organizations naturally shift to a balanced ratio of orchestrators (minority) to doers (majority), which increases team capacity without hiring more employees. This shift helps ensure teams can meet their delivery commitments and creates a transparent environment where individuals feel empowered and informed.



Quote for the day:

“Things come to those who wait, but only the things left by those who hustle” -- Abraham Lincoln

Daily Tech Digest - December 11, 2024

Low-tech solutions to high-tech cybercrimes

The growing quality of deepfakes, including real-time deepfakes during live video calls, invites scammers, criminals, and even state-sponsored attackers to convincingly bypass security measures and steal identities for all kinds of nefarious purposes. AI-enabled voice cloning has already proved to be a massive boon for phone-related identity theft. AI enables malicious actors to bypass face recognition protection. And AI-powered bots are being deployed to intercept and use one-time passwords in real time. More broadly, AI can accelerate and automate just about any cyberattack. ... Once established (not in writing… ), the secret word can serve as a fast, powerful way to instantly identify someone. And because it’s not digital or stored anywhere on the Internet, it can’t be stolen. So if your “boss” or your spouse calls you to ask you for data or to transfer funds, you can ask for the secret word to verify it’s really them. ... Farrow emphasizes a simple way to foil spyware: reboot your phone every day. He points out that most spyware is purged with a reboot. So rebooting every day makes sure that no spyware remains on your phone. He also stresses the importance of keeping your OS and apps updated to the latest version.


7 Essential Trends IT Departments Must Tackle In 2025

Taking responsibility for cybersecurity will remain a key function of IT departments in 2025 as organizations face off against increasingly sophisticated and frequent attacks. Even as businesses come to understand that everyone from the boardroom to the shop floor has a part to play in preventing attacks, IT teams will inevitably be on the front line, with the job of securing networks, managing update and installation schedules, administering access protocols and implementing zero-trust measures. ... In 2025, AIOps are critical to enabling businesses to benefit from real-time resource optimization, automated decision-making and predictive incident resolution. This should empower the entire workforce, from marketing to manufacturing, to focus on innovation and high-value tasks rather than repetitive technical work best left to machines. ... with technology functions playing an increasingly integral role in business growth, other C-level roles have emerged to take on some of the responsibilities. As well as Chief Data Officers (CDOs) and Chief Information Security Officers (CISOs), it’s increasingly common for organizations to appoint Chief AI Officers (CAIOs), and as the role of technology in organizations continues to evolve, more C-level positions are likely to become critical.


Passkey adoption by Australian govt, banks drives wider passwordless authentication

“A key change has been to the operation of the security protocols that underpin passkeys and passwordless authentication. As this has improved over time, it has engendered more trust in the technology among technology teams and organisations, leading to increased adoption and use.” “At the same time, users have become more comfortable with biometrics to authenticate to digital services.” Implementation and enablement have also improved, leveraging templates and no-code, drag-and-drop orchestration to “allow administrators to swiftly design, test and deploy various out-of-the-box passwordless registration and authentication experiences for diverse customer identity types, all at scale, with minimal manual setup.” ... Banks are among the major drivers of passkey adoption in Australia. According to an article in the Sydney Morning Herald, National Australia Bank (NAB) chief security officer Sandro Bucchianeri says passwords are “terrible” – and on the way out. ... Specific questions pertaining to passkeys include, “Do you agree or disagree with including use of a passkey as an alternative first-factor identity authentication process?” and “Does it pose any security or fraud risks? If so, please describe these in detail.”


Why crisis simulations fail and how to fix them

Communication gaps are particularly common between technical leadership and business executives. These teams work in silos, which often causes misalignment and miscommunication. Technical staff use jargon that executives don’t fully understand, while business priorities may be unclear to the technical team. As a result, it becomes difficult to discern what requires immediate attention and communication versus what constitutes noise, and critical decisions slow down. Throw in third-party vendors or MSPs, and the confusion and chaos are only amplified. Role confusion is another interesting challenge. Crisis management playbooks typically assign roles to tasks, but give no detail on what those roles mean. I have seen teams come into an exercise confident about the name of their role but with no idea what it means in terms of actual execution. Many times, teams don’t even know that a role exists within the team or who owns it. A fitting example is a “crisis simulation secretary” — someone tasked with recording notes for the meetings, scheduling the calls, making sure everyone has the correct numbers to dial in, and so on. This may seem trivial, but it is a critical role, as you do not want to waste precious minutes trying to dial into a call.


What CIOs are in for with the EU’s Data Act

There is much the CIO will have to do in light of Data Act provisions. In the meantime, as explained by Perugini, CIOs must do due diligence on the data their companies collect from connected devices and understand where they are in the value chain — whether they are the owners, users, or recipients. “If the company produces a connected industrial machine and gives it to a customer and then maintains the machine, it finds itself collecting the data as the owner,” she says. “If the company is a customer of the machine, it’s a user and co-generates the data. But if it’s a company that acquires the machine’s data, it’s a recipient, because the user or the manufacturer has made the data available or participates in a data marketplace. CIOs can also see if there’s data generated by others on the market that can be used for internal analysis, and procure it. Any use or exchange of data must be regulated by an agreement between the interested parties with contracts.” The CIO will also have to evaluate contracts with suppliers, ensuring terms are compliant, and negotiate with suppliers to access data in a direct and interoperable way. Plus, the CIO has to evaluate whether the company’s IT infrastructure is suitable to guarantee the interoperability and security of data as per GDPR.


How slowing down can accelerate your startup’s growth

In startup culture, there’s a pervasive pressure to say “yes” to every opportunity, to grow at all costs. But I’ve learned that restraint is an underrated virtue in business. At Aloha, we had to make tough choices to stay on the path of sustainable growth. We focused on our core mission and turned down attractive but potentially distracting opportunities that would have taken resources away from what mattered most. ... One of the most persistent traps for startups is the “growth at all costs” mindset. Top-line growth can be impressive, but if it’s achieved without a path to profitability, it’s a house of cards. When I joined Aloha, we refocused our efforts on creating a financially sustainable business. This meant dialing back on some of our expansion plans to ensure we were growing within our means. ... In a world that worships speed, it takes courage to slow down. It’s not easy to resist the siren call of hypergrowth. But when you do, you create the conditions for a business that can weather storms, adapt to change, and keep thriving. Building a company on these principles doesn’t mean abandoning growth—it means ensuring that growth is meaningful and sustainable. Slow and steady may not be glamorous, but it works.


Why business teams must stay out of application development

Citizen development is when non-tech users build business applications using no-code/low-code platforms, which automate code generation. Imagine that you need a simple leave application tool within the organization. Enterprises can’t afford to deploy their busy and expensive professional developers to build an internal tool, so they go the citizen development way. ... Proponents of citizen development argue that the apps built with low-code platforms are highly customizable. What they mean is that users can mix and match elements and change colors. For enterprise apps, this is all in a day’s work. True customizability comes from real, editable code that empowers developers to hand-code the parts that handle complex and edge cases (see the sketch after this paragraph). Business users cannot build these types of features because low-code platforms themselves are not designed to handle them. ... Finally, the most important gap that citizen development creates is in security. A vast majority of security attacks happen due to human error, such as phishing scams, downloading ransomware, or improper credential management. In fact, IBM found that there has been a 71% increase this year in cyberattacks that used stolen or compromised credentials.
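To ground the customizability point, here is a hypothetical sketch of the kind of edge-case logic in even a “simple” leave tool that typically forces a drop to real code: regional holiday calendars and team-coverage rules that drag-and-drop rule builders struggle to express. All names, rules and thresholds are illustrative.

```typescript
// Hypothetical leave-approval logic with the edge cases that usually
// require hand-written code. Calendars and thresholds are illustrative.
interface LeaveRequest {
  employeeId: string;
  start: Date;
  end: Date;
  region: "AU" | "EU" | "US";
}

const REGIONAL_HOLIDAYS: Record<string, string[]> = {
  AU: ["2025-01-27", "2025-04-25"],
  EU: ["2025-05-01"],
  US: ["2025-07-04"],
};

function businessDaysRequested(req: LeaveRequest): number {
  let days = 0;
  const holidays = new Set(REGIONAL_HOLIDAYS[req.region]);
  for (let d = new Date(req.start); d.getTime() <= req.end.getTime(); d.setDate(d.getDate() + 1)) {
    const isWeekend = d.getDay() === 0 || d.getDay() === 6;
    const isHoliday = holidays.has(d.toISOString().slice(0, 10));
    if (!isWeekend && !isHoliday) days++; // edge case: skip weekends and regional holidays
  }
  return days;
}

function canAutoApprove(req: LeaveRequest, teamOnLeave: number, teamSize: number): boolean {
  // Edge cases: never let more than 40% of a team be away at once, and
  // cap auto-approval at 5 business days; longer requests escalate to a manager.
  const coverageOk = (teamOnLeave + 1) / teamSize <= 0.4;
  return coverageOk && businessDaysRequested(req) <= 5;
}
```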


The rise of observability: A new era in IT Operations

Observability empowers organisations to not just detect that a problem exists, but to understand why it’s happening and how to resolve it. It’s the difference between knowing that a car has broken down and having a detailed diagnostic report that pinpoints the exact issue and suggests an effective repair. The transition from monitoring to observability is not without its challenges. Some organisations find themselves struggling with legacy systems and entrenched processes that resist change. Observability represents a shift from traditional IT operations, requiring a new mindset and skill set. However, the benefits of implementing observability practices far outweigh the initial challenges. While there may be concerns about skill gaps, modern observability platforms are designed to be user-friendly and accessible to team members at all levels. ... Implementing observability results in clear, measurable benefits, especially around improved service reliability. Because teams can identify and resolve issues quickly and proactively, downtime is minimised or eradicated. Enhanced reliability leads to better customer experiences, which is a crucial differentiator in a competitive market where user satisfaction is key.
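As a concrete sketch of the shift from “it broke” to “why it broke”, here is a minimal tracing example using the OpenTelemetry API, one common observability toolkit (not necessarily the platform any particular vendor here sells). It assumes an SDK and exporter are configured elsewhere; the service and function names are illustrative.

```typescript
// Minimal tracing sketch with the OpenTelemetry API. The recorded
// attributes, events and exceptions are what turn "something failed"
// into a diagnostic report a responder can act on.
import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("checkout-service");

async function chargeCustomer(orderId: string, amountCents: number): Promise<void> {
  await tracer.startActiveSpan("charge-customer", async (span) => {
    // Attributes tie the failure to this order, this amount, this code path
    span.setAttribute("order.id", orderId);
    span.setAttribute("charge.amount_cents", amountCents);
    try {
      await paymentGateway(orderId, amountCents); // hypothetical downstream call
      span.addEvent("payment-accepted");
    } catch (err) {
      // Captures *why* the request failed, not just that it did
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR, message: (err as Error).message });
      throw err;
    } finally {
      span.end();
    }
  });
}

async function paymentGateway(orderId: string, amountCents: number): Promise<void> {
  // Stub standing in for a real payment call.
}
```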


5 Trends Reshaping the Data Landscape

With increased interest in generative AI and predictive AI, as well as supporting traditional analytical workloads, “we’re seeing a pretty massive increase of data sprawl across industries,” he observed. “They track with the realization among many of our customers that they’ve created a lot of different versions of the truth and silos of data which have different systems, both on-prem and in the cloud.” ... If a data team “can’t get the data where it needs to go, they’re not going to be able to analyze it in an efficient, secure way,” he said. “Leaders have to think about scale in new ways. There are so many systems downstream that consume data. Scaling these environments as the data is growing in many cases by almost double-digit percentages year over year is becoming unwieldy.” A proactive approach is to address these costs and silos through streamlining and simplification on a single common platform, Kethireddy urged, noting Ocient’s approach to “take the path to reducing the amount of hardware and cloud instances it takes to analyze compute-intensive workloads. We focus on minimizing costs associated with the system footprint and energy consumption.”


Serverless Computing: The Future of Programming and Application Deployment Innovations

Serverless computing automates scaling: the cloud provider adds and removes function instances in response to workload, freeing developers to focus on code rather than capacity planning. The provider also automates the distribution of incoming traffic across a function's multiple instances. This elasticity means developers can build applications that handle large volumes of traffic without managing the underlying cloud infrastructure. Function executions are time-limited, however, typically ranging from milliseconds to several minutes, so application code must be optimized to complete within those bounds (see the sketch below). ... Cloud providers build security features such as encryption and access control into their serverless infrastructure, and apply automated security updates and patches to it, which also supports rapid prototyping. Serverless computing has drawbacks, though. "Cold starts" add latency the first time a serverless function is invoked after sitting idle, and the limited function lifecycle constrains long-running workloads and can drag down performance.
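To make these constraints concrete, here is a hedged sketch of an AWS Lambda-style handler in TypeScript (Lambda is one provider example; the event shape and workload are simplified placeholders). On Lambda, an invocation may run for up to 15 minutes, so longer work must be chunked or queued.

```typescript
// Sketch of a serverless function handler (AWS Lambda-style; event
// shape simplified). The platform scales instances and routes traffic;
// this code handles one request and must finish within the timeout.

// Module-scope code runs once per cold start and is then reused across
// warm invocations — which is why first requests are slower.
const initializedAt = new Date().toISOString();

interface ResizeEvent {
  imageUrl: string;
  width: number;
}

export const handler = async (event: ResizeEvent) => {
  const started = Date.now();

  // Hypothetical unit of work; must complete within the platform's
  // execution limit (e.g. up to 15 minutes on AWS Lambda).
  const resized = await resizeImage(event.imageUrl, event.width);

  return {
    statusCode: 200,
    body: JSON.stringify({ url: resized, coldStartAt: initializedAt, ms: Date.now() - started }),
  };
};

async function resizeImage(url: string, width: number): Promise<string> {
  // Stub standing in for real image processing (fetch + transform + upload).
  return `${url}?w=${width}`;
}
```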
 


Quote for the day:

"If you want to be successful prepare to be doubted and tested." -- @PilotSpeaker