Daily Tech Digest - May 15, 2025


Quote for the day:

“Challenges are what make life interesting and overcoming them is what makes life meaningful.” -- Joshua J. Marine


How to use genAI for requirements gathering and agile user stories

The key to success is engaging end-users and stakeholders in developing the goals and requirements around features and user stories. ... GenAI should help agile teams incorporate more design thinking practices and increase feedback cycles. “GenAI tools are fundamentally shifting the role of product owners and business analysts by enabling them to prototype and iterate on requirements directly within their IDEs rapidly,” says Simon Margolis, Associate CTO at SADA. “This allows for more dynamic collaboration with stakeholders, as they can visualize and refine user stories and acceptance criteria in real time. Instead of being bogged down in documentation, they can focus on strategic alignment and faster delivery, with AI handling the technical translation.” ... “GenAI excels at aligning user stories and acceptance criteria with predefined specs and design guidelines, but the original spark of creativity still comes from humans,” says Ramprakash Ramamoorthy, director of AI research at ManageEngine. “Analysts and product owners should use genAI as a foundational tool rather than relying on it entirely, freeing themselves to explore new ideas and broaden their thinking. The real value lies in experts leveraging AI’s consistency to ground their work, freeing them to innovate and refine the subtleties that machines cannot grasp.”


5 Subtle Indicators Your Development Environment Is Under Siege

As security measures around production environments strengthen, which they have, attackers are shifting left—straight into the software development lifecycle (SDLC). These less-protected and complex environments have become prime targets, where gaps in security can expose sensitive data and derail operations if exploited. That’s why recognizing the warning signs of nefarious behavior is critical. But identification alone isn’t enough—security and development teams must work together to address these risks before attackers exploit them. ... Abnormal spikes in repository cloning activity may indicate potential data exfiltration from Software Configuration Management (SCM) tools. When an identity clones repositories at unexpected volumes or times outside normal usage patterns, it could signal an attempt to collect source code or sensitive project data for unauthorized use. ... While cloning is a normal part of development, a repository that is copied but shows no further activity may indicate an attempt to exfiltrate data rather than legitimate development work. Pull Request approvals from identities lacking repository activity history may indicate compromised accounts or an attempt to bypass code quality safeguards. When changes are approved by users without prior engagement in the repository, it could be a sign of malicious attempts to introduce harmful code or represent reviewers who may overlook critical security vulnerabilities.
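A minimal sketch of how such a clone-volume check might look, assuming an SCM audit log exported as (identity, timestamp, repository) events. The field names, thresholds, and sample events are illustrative, not any vendor's API:

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean, pstdev

# Hypothetical audit-log records exported from an SCM platform.
clone_events = [
    ("alice", datetime(2025, 5, 14, 9, 15), "payments-api"),
    ("alice", datetime(2025, 5, 14, 9, 20), "payments-api"),
    ("svc-ci", datetime(2025, 5, 15, 2, 5), "payments-api"),
    # ... many more events in a real export
]

def hourly_counts(events):
    """Count clones per identity per hour."""
    counts = defaultdict(lambda: defaultdict(int))
    for identity, ts, _repo in events:
        counts[identity][ts.replace(minute=0, second=0, microsecond=0)] += 1
    return counts

def flag_spikes(events, z_threshold=3.0, min_hours=24):
    """Flag identities whose clone volume in any hour far exceeds their own baseline."""
    alerts = []
    for identity, per_hour in hourly_counts(events).items():
        series = list(per_hour.values())
        if len(series) < min_hours:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(series), pstdev(series) or 1.0
        for hour, count in per_hour.items():
            if (count - mu) / sigma > z_threshold:
                alerts.append((identity, hour, count))
    return alerts

for identity, hour, count in flag_spikes(clone_events):
    print(f"Possible exfiltration: {identity} cloned {count} repos around {hour}")
```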


Data, agents and governance: Why enterprise architecture needs a new playbook

The rapid evolution of AI and data-centric technologies is forcing organizations to rethink how they structure and govern their information assets. Enterprises are increasingly moving from domain-driven data architectures — where data is owned and managed by business domains — to AI/ML-centric data models that require large-scale, cross-domain integration. Questions arise about whether this transition is compatible with traditional EA practices. The answer: While there are tensions, the shift is not fundamentally at odds with EA but rather demands a significant transformation in how EA operates. ... Governance in an agentic architecture flips the script for EA by shifting focus to defining the domain authority of the agent to participate in an ecosystem. That encompasses the system they can interact with, the commands they can execute, the other agents they can interact with, the cognitive models they rely on and the goals that are set for them. Ensuring agents are good corporate citizens means enterprise architects must engage with business units to set the parameters for what an agent can and cannot do on behalf of the business. Further, the relationship and those parameters must be “tokenized” to authenticate the capacity to execute those actions. 

California’s location data privacy bill aims to reshape digital consent

“We’re really trying to help regulate the use of your geolocation data,” says the bill’s author, Democratic Assemblymember Chris Ward, who represents California’s 78th district, which covers parts of San Diego and surrounding areas. “You should not be able to sell, rent, trade, or lease anybody’s location information to third parties, because nobody signed up for that.” Among types of personal information, location data is especially sensitive. It reveals where people live, work, worship, protest, and seek medical care. It can expose routines, relationships, and vulnerabilities. As stories continue to surface about apps selling location data to brokers, government workers, and even bounty hunters, the conversation has expanded. What was once a debate about privacy has increasingly become a concern over how the exposure of this data infringes upon fundamental civil liberties. “Geolocation is very revealing,” says Justin Brookman, the director of technology policy at Consumer Reports, which supported the legislation. “It tells a lot about you, and it also can be a public safety issue if it gets into the wrong person’s hands.” ... Equally troubling, Ward argues, is who benefits. The companies collecting and selling this data are driven by profit, not transparency. As scholar Shoshana Zuboff has argued, surveillance capitalism doesn’t thrive because users want personalized ads. 


Digital Transformation Expert Discusses Trends

From day one, I emphasise that digital transformation isn’t just about adopting new tools—it’s about aligning those tools with business objectives, improving internal processes, and responding to changing customer expectations. To bring this to life, I use a blended approach that combines theory with real-world practice. Students explore frameworks and models that explain how businesses adapt to technological change, and then apply these to real case studies from global companies, SMEs, and my own entrepreneurial experiences. These examples give them insight into how digital transformation plays out in areas like operations, marketing, and customer relationship management (CRM). Active learning is central to my teaching. I use group work, live problem-solving, digital tool demonstrations, and hands-on simulations to help students experience digital transformation in action. I also introduce them to established business platforms and emerging technologies, encouraging them to assess their value and strategic impact. Ultimately, I aim to create an environment where students don’t just learn about digital transformation—they think like digital leaders, able to question, analyse, and apply what they’ve learned in real organisational contexts.


Building cybersecurity culture in science-driven organizations

The perception of security as a barrier is a challenge faced by many organizations, especially in environments where innovation is prioritized. The solution lies in shifting the narrative: security is the caregiver for the value created in the organization. Most scientists and executives already understand the consequences of a cyberattack—lost research, stolen intellectual property, and disrupted operations. We involve them in the process. When lab leaders feel that their input has shaped security protocols, they’re more likely to support and champion those initiatives. Co-creating solutions ensures that security controls are not only effective but also practical for the scientific workflow. In short, building trust, demonstrating empathy for their challenges, and proving the value of security through action are what ultimately win buy-in. ... Shadow IT is a reality in any organization, but it’s particularly prevalent in environments like ours, where creativity and experimentation often outpace formal approval processes. While it’s important to communicate the risks of shadow IT clearly, we also recognize that outright bans are rarely effective. Instead, we focus on enabling secure alternatives. In the broader organization, we use tools to detect and prevent shadow IT, combined with strict communication around approved solutions.


LastPass can now monitor employees' rogue reliance on shadow SaaS - including AI tools

With LastPass's browser extension for password management already well-positioned to observe -- and even restrict -- employee web usage, the security company has announced that it's diversifying into SaaS monitoring for small to midsize enterprises (SMEs). SaaS monitoring is part of a larger technology category known as SaaS Identity and Access Management, or SaaS IAM. As more employees are drawn to AI to improve productivity, the company is pitching an affordable solution to help SMEs contain the risks and costs associated with shadow SaaS, an umbrella term for rogue SaaS procurement that includes shadow IT and its latest variant -- shadow AI. ... LastPass sees the new capabilities aligning with an organization's business objectives in a variety of ways. "One could be compliance," MacLennan told ZDNET. "Another could be the organization's internal sense of risk and risk management. Another could be cost because we're surfacing apps by category, in which case you'll see the whole universe of duplicative apps in use." MacLennan also noted that the new offering makes it easy to reduce costs due to the over-provisioning of SaaS licenses. For example, an organization might be paying for 100 seats of some SaaS solution while the SaaS monitoring tool reveals that only 30 of those licenses are in active use.


Why ISO 42001 sets the standard for responsible AI governance

ISO 42001 is particularly relevant for organisations operating within layered supply chains, especially those building on cloud platforms. For these environments, where infrastructure, platform and software providers each play a role in delivering AI-powered services to end users, organisations must maintain a clear chain of responsibility and vendor due diligence. By defining roles across the shared responsibility model, ISO 42001 helps ensure that governance, compliance and risk management are consistent and transparent from the ground up. Doing so not only builds internal confidence but also enables partners and providers to demonstrate trustworthiness to customers across the value chain. As a result, trust management becomes a vital part of the picture by delivering an ongoing process of demonstrating transparency and control around the way organisations handle data, deploy technology, and meet regulatory expectations. Rather than treating compliance as a static goal, trust management introduces a more dynamic, ongoing approach to demonstrating how AI is governed across an organisation. By operationalising transparency, it becomes much easier to communicate security practices and explain decision-making processes to provide evidence of responsible development and deployment.


Beyond the office: Preparing for disasters in a remote work world

When disaster strikes, employees may be without electricity, internet, or cell service for days or weeks. They may have to evacuate their homes. They may be struggling with the loss of family members, friends, or neighbors. Just as organizations have disaster mitigation and recovery plans for main offices and data centers, they should be prepared to support remote employees in disaster situations they likely have never encountered before. Employers must counsel workers on what to do, provide additional resources, and above all, ensure that their mental health is attended to. ... Beyond cybersecurity risks, being forced to leave their home environment presents employees with another significant challenge: the potential loss of personal artifacts, from tax documents and family heirlooms to cherished photos. Lahiri refers to the process of safeguarding such items as “personal disaster recovery planning” and notes that this aspect of worker support is often overlooked. While companies have experience migrating servers from local offices to distributed teams, few have considered how to support employees on a personal level, he says. Lahiri urges IT teams to take a more empathetic approach and broaden their scope to include disaster recovery planning for employees’ home offices.


Beyond the Gang of Four: Practical Design Patterns for Modern AI Systems

Prompting might seem trivial at first. After all, you send free-form text to a model, so what could go wrong? However, how you phrase a prompt and what context you provide can drastically change your model's behavior, and there's no compiler to catch errors or a standard library of techniques. ... Few-Shot Prompting is one of the most straightforward yet powerful prompting approaches. Without examples, your model might generate inconsistent outputs, struggle with task ambiguity, or fail to meet your specific requirements. You can solve this problem by providing the model with a handful of examples (input-output pairs) in the prompt and then providing the actual input. You are essentially providing training data on the fly. This allows the model to generalize without re-training or fine-tuning. ... If you are a software developer trying to solve a complex algorithmic problem or a software architect trying to analyze complex system bottlenecks and vulnerabilities, you will probably brainstorm various ideas with your colleagues to understand their pros and cons, break down the problem into smaller tasks, and then solve it iteratively, rather than jumping to the solution right away. In Chain-of-Thought (CoT) prompting, you encourage the model to follow a very similar process and think aloud by breaking the problem down into a step-by-step process.
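A minimal sketch of both techniques in Python: build_prompt assembles a few-shot classification prompt from input-output pairs, and build_cot_prompt wraps a problem in a step-by-step instruction. The helper names and example tickets are hypothetical, and the actual model call is deliberately left out:

```python
# Few-shot prompt assembly. `call_model` (not shown) would be whatever
# chat-completion client you use; nothing here is a specific vendor API.
FEW_SHOT_EXAMPLES = [
    ("Refund not received after 10 days", "Billing"),
    ("App crashes when uploading a photo", "Bug"),
    ("How do I export my data?", "How-to"),
]

def build_prompt(ticket: str) -> str:
    lines = ["Classify each support ticket into one category."]
    for text, label in FEW_SHOT_EXAMPLES:          # "training data on the fly"
        lines.append(f"Ticket: {text}\nCategory: {label}")
    lines.append(f"Ticket: {ticket}\nCategory:")    # the actual input
    return "\n\n".join(lines)

def build_cot_prompt(problem: str) -> str:
    # Chain-of-Thought: ask the model to reason step by step before answering.
    return (
        f"{problem}\n\n"
        "Think through the problem step by step, listing intermediate reasoning, "
        "then give the final answer on its own line prefixed with 'Answer:'."
    )

print(build_prompt("Charged twice for the same order"))
```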

Daily Tech Digest - April 11, 2025


Quote for the day:

"Efficiency is doing the thing right. Effectiveness is doing the right thing." -- Peter F. Drucker


Legacy to Cloud: Accelerate Modernization via Containers

What could be better than a solution that lets you run applications across environments without dependency constraints? That’s where containers come in. They accelerate your modernization journey. The containerization of legacy applications liberates them from the rusty old VMs and servers that limit the scalability and agility of applications. Containerization offers benefits including agility, portability, resource efficiency, scalability and security. ... migrating legacy applications to containers is not a piece of cake. It requires careful planning and execution. Unlike cloud native applications, which are built for containers and Kubernetes, legacy applications were not designed with containerization in mind. The process demands significant time and expertise, and organizations often struggle at the very first step. Legacy monoliths, with their tightly coupled components and complex dependencies, require particularly extensive Dockerfiles. Writing Dockerfiles for legacy monoliths is complex and error-prone, often becoming a significant bottleneck in the modernization journey. ... The challenge intensifies when documentation is outdated or missing, turning what should be a modernization effort into a resource-draining archaeological expedition through layers of technical debt.


Four paradoxes of software development

No one knows how long the job will take, but the customer demands a completion date. This, frankly, is probably the biggest challenge that software development organizations face. We simply can’t be certain how long any project will take. Sure, we can estimate, but we are almost always wildly off. Sometimes we drastically overestimate the time required, but usually we drastically underestimate it. For our customers, this is both a mystery and a huge pain. ... Adding developers to a late project makes it later. Known as Brooks’s Law, this rule may be the strangest of the paradoxes to the casual observer. Normally, if you realize that you aren’t going to meet the deadline for filling your monthly quota of toothpaste tubes, you can put more toothpaste tube fillers on the job and make the date. If you want to double the number of houses that you build in a given year, you can usually double the inputs—labor and materials—and get twice as many houses, give or take a few. ... The better you get at coding, the less coding you do. It takes many years to gain experience as a software developer. Learning the right way to code, the right way to design, and all of the rules and subtleties of writing clean, maintainable software doesn’t happen overnight. ... Software development platforms and tools keep getting better, but software takes just as long to develop and run.


Drones are the future of cybercrime

The rapid evolution of consumer drone technology is reshaping its potential uses in many ways, including its application in cyberattacks. Modern consumer drones are quieter, faster, and equipped with longer battery life, enabling them to operate further from their operators. They can autonomously navigate obstacles, track moving objects, and capture high-resolution imagery or video. ... And there are so many other uses for drones in cyberattacks: Network sniffing and spoofing: Drones can be equipped with small, modifiable computers such as a Raspberry Pi to sniff out information about Wi-Fi networks, including MAC addresses and SSIDs. The drone can then mimic a known Wi-Fi network, and if unsuspecting individuals or devices connect to it, hackers can intercept sensitive information such as login credentials. Denial-of-service attacks: Drones can carry devices to perform local de-authentication attacks, disrupting communications between a user and a Wi-Fi access point. They can also carry jamming devices to disrupt Wi-Fi or other wireless communications. Physical surveillance: Drones equipped with high-quality cameras can be used for physical surveillance to observe shift changes, gather information on security protocols, and plan both physical and cyberattacks by identifying potential entry points or vulnerabilities. 


From Silos to Strategy: Why Holistic Data Management Drives GenAI Success

While data distribution is essential to mitigate risks, it requires a unified approach to be effective. Many enterprises are recognizing the value of implementing unified data architectures that simplify storage and data management and centralize the management of diverse data platforms. These architectures, combined with intelligent data platforms, enable seamless access and analysis of data, making it easier to support analytics and ingestion by generative AI. IT managers can further enhance a system’s data analysis and network security, and introduce a hybrid cloud experience to simplify data management. Today, the tech industry is focused on streamlining how enterprises manage and optimize storage, data, and workloads, and a platform-based approach to hybrid cloud management is critical to manage IT across on-premises, colocation and public cloud environments. Innovations like unified control planes and software-defined storage solutions are being utilized to enable seamless data and application mobility. These solutions allow enterprises to move data and applications across hybrid and multi-cloud environments to optimize performance, cost, and resiliency. By simplifying cloud data management, enterprises can efficiently manage and protect globally dispersed storage environments without over-emphasizing resilience at the expense of overall system optimization.


Why remote work is a security minefield (and what you can do about it)

The remote work environment makes employees more vulnerable to phishing and social engineering attacks, as they are isolated and may find it harder to verify suspicious activities. Working from home can create a sense of comfort that leads to relaxation, making employees more prone to risky security behavior. The isolation associated with remote work can also result in impulsive decisions, increasing the likelihood of mistakes. Cybercriminals exploit this by tailoring social engineering attacks to mimic IT staff or colleagues, taking advantage of the lack of direct verification. ... To address these challenges, organizations must prioritize a security-first culture. By prioritizing cybersecurity at every level, from executives to remote workers, organizations can reduce their vulnerability to cyber threats. Additionally, companies can foster peer support networks where employees can share security tips and collaborate on solutions. Another problem that can arise with remote work is privacy. Some companies monitor employee activity to protect their data and ensure compliance with regulations. Monitoring helps detect suspicious behavior and mitigate cyber threats, but it can raise privacy concerns, especially when it involves intrusive methods like tracking keystrokes or taking periodic screenshots. To find a good balance, companies should be upfront about what they’re monitoring and why. 


Inside a Cyberattack: How Hackers Steal Data

Once a hacker breaches the perimeter, the standard practice is to beachhead (dig down) and then move laterally to find the organization’s crown jewels: their most valuable data. Within a financial or banking organization, it is likely there is a database on their server that contains sensitive customer information. A database is essentially a complicated spreadsheet, wherein a hacker can simply click Select and copy everything. In this instance, data security is essential; many organizations, however, confuse data security with cybersecurity. Organizations often rely on encryption to protect sensitive data, but encryption alone isn’t enough if the decryption keys are poorly managed. If an attacker gains access to the decryption key, they can instantly decrypt the data, rendering the encryption useless. Many organizations also mistakenly believe that encryption protects against all forms of data exposure, but weak key management, improper implementation, or side-channel attacks can still lead to compromise. To truly safeguard data, businesses must combine strong encryption with secure key management, access controls, and techniques such as tokenization or format-preserving encryption to minimize the impact of a breach. A database protected by privacy enhancing technologies (PETs), such as tokenization, becomes unreadable to hackers if the decryption key is stored offsite. 
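A toy illustration of the tokenization idea described above. The in-memory dictionary stands in for a separately secured token vault, which in practice runs on different infrastructure with its own keys and access controls; that separation is what makes a copied database worthless to an attacker:

```python
import secrets

# Toy tokenization sketch: _vault is a stand-in for an offsite mapping service.
_vault: dict[str, str] = {}

def tokenize(value: str) -> str:
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = value          # real value never lands in the app database
    return token

def detokenize(token: str) -> str:
    return _vault[token]           # only callable by services allowed to see plaintext

record = {"name": "Jane Doe", "card_number": tokenize("4111111111111111")}
print(record)                      # the database row an attacker would copy
print(detokenize(record["card_number"]))
```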


You’re always a target, so it pays to review your cybersecurity insurance

Right now, either someone has identified your firm and your weak spots and begun a campaign of targeted phishing attacks, scam links, or credential harvesting, or they are blindly trying to use any number of known vulnerabilities on the web to crack into remote access and web properties. ... Reviewing my compliance with cyber insurance policies was a great exercise in self-assessing just how thorough my base security is, but it also revealed an important fact: that insurance requirements only scratch the surface of the types of discussions you should be having internally regarding your risks of attack. No matter if you feel you are merely at risk of being accidental roadkill on the information superhighway or are actually in the crosshairs of a malicious attacker, always review the risks not only with your cyber insurance carrier in mind, but also with what the attackers are planning. ... During the annual renewal of cyber insurance, the insurance carrier would not even consider insuring my business if we did not demonstrate that we had some fundamental protections in place. Based on the questions and bullet points, you could tell they saw the remote access, third-party vendor access, and network administrator accounts as weak points that needed additional protection.


9 steps to take to prepare for a quantum future

To get ahead of the quantum cryptography threat, companies should immediately start assessing their environment. “What we’re advising clients to do – and working on with clients today – is first go and inventory your encryption algorithms and know what you’re using,” says Saylors. That can be tricky, he adds. ... Because of the complexity of the tasks, ISG’s Saylors suggests that enterprises prioritize their efforts. The first step, he says, is to look at perimeter security. The second step is to look at the encryption around the most critical assets. And the third step is to look at the encryption around data backups. All of this needs to happen as soon as possible. In fact, according to Gartner, enterprises should have created a cryptography database by the end of 2024. Companies should have created cryptography policies and planned their transition to post-quantum encryption by the end of 2024, the research firm says. ... So everything will have to be carefully tested and some cryptographic processes may need to be rearchitected. But the bigger problem is that the new algorithms might themselves be deprecated as technology continues to evolve. Instead, Horvath and other experts recommend that enterprises pursue quantum agility. If any cryptography is hard-coded into processes, it needs to be separated out. “Make it so that any cryptography can work in there,” he says.
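One way to read "quantum agility" in code is to route every encrypt and decrypt call through a single seam that tags ciphertext with the algorithm that produced it, so a post-quantum scheme can be registered later without touching application code. A hedged sketch, using the widely available cryptography package for today's algorithm; the registry layout is an assumption, not a standard:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Crypto-agility sketch: all callers go through encrypt()/decrypt(); swapping in
# a post-quantum or hybrid scheme means registering a new entry, not a rewrite.
def _aesgcm_encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def _aesgcm_decrypt(key: bytes, blob: bytes) -> bytes:
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

REGISTRY = {"aes256-gcm": (_aesgcm_encrypt, _aesgcm_decrypt)}
DEFAULT_ALG = "aes256-gcm"   # flip this during a future migration

def encrypt(key: bytes, plaintext: bytes, alg: str = DEFAULT_ALG) -> tuple[str, bytes]:
    return alg, REGISTRY[alg][0](key, plaintext)

def decrypt(key: bytes, alg: str, blob: bytes) -> bytes:
    return REGISTRY[alg][1](key, blob)

key = AESGCM.generate_key(bit_length=256)
alg, blob = encrypt(key, b"customer record")
print(alg, decrypt(key, alg, blob))
```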


Why neurodivergent perspectives are essential in AI development

Experts in academia, civil society, industry, media, and government discussed and debated the latest developments in AI safety and ethics, but representation of neurodivergent perspectives in AI development wasn’t examined. This is a huge oversight especially considering 70 million people in the US alone learn and think differently, including many in tech. Technology should be built for and serve all, so how do we make sure future AI models are accessible and unbiased if neurodivergent representation isn’t considered? It all starts at the development stage. ... A neurodivergent team also makes it easier to explore a wider range of use cases and the risks associated with applications. When you engage neurodivergent people at the development stage, you create a team that understands and prioritizes diverse ways of thinking, learning, and working. And that benefits all users. ... New data from EY found that 85% of neurodivergent employees think gen AI creates a more inclusive workplace, so it’s incumbent on more companies to level the playing field by casting a wider net to include a broader range of employees and tools needed to thrive and generate more accurate and robust datasets. Gen AI can also go a long way to help neurodivergent workers with simple tasks like productivity, quality assurance, and time management. 


Your data's probably not ready for AI - here's how to make it trustworthy

"AI and gen AI are raising the bar for quality data," according to a recent analysis published by Ashish Verma, chief data and analytics officer at Deloitte US, and a team of co-authors. "GenAI strategies may struggle without a clear data architecture that cuts across types and modalities, accounting for data diversity and bias and refactoring data for probabilistic systems," the team stated. ... "Creating a data environment with robust data governance, data lineage, and transparent privacy regulations helps ensure the ethical use of AI within the parameters of a brand promise," said Clayton. Building a foundation of trust helps prevent AI from going rogue, which can easily lead to uneven customer experiences." Across the industry, concern is mounting over data readiness for AI. "Data quality is a perennial issue that businesses have faced for decades," said Gordon Robinson, senior director of data management at SAS. There are two essential questions on data environments for businesses to consider before starting an AI program, he added. First, "Do you understand what data you have, the quality of the data, and whether it is trustworthy or not?" Second, "Do you have the right skills and tools available to you to prepare your data for AI?"


Daily Tech Digest - December 15, 2024

Navigating the Future: Cloud Migration Journeys and Data Security

To meet the requirements of DORA and future regulations, business leaders must adopt a proactive and reflexive approach to cybersecurity. Strong cyber hygiene practices must be integrated throughout the business, ensuring consistency in how data is handled, protected, and accessed. It is important to note at this juncture that enhanced data security isn’t purely focused on compliance. Modern IT researchers and business analysts have been studying what differentiates the most innovative companies for decades and have identified two key principles that help businesses achieve this: Unified Control and Federated Protection. ... Advancements in data security technologies are reshaping the cloud landscape, enabling faster and more secure migrations. Privacy Enhancing Technologies (PETs) like dynamic data masking (DDM), tokenisation, and format-preserving encryption help businesses anonymise sensitive data, reducing breach risks while keeping cloud adoption fast and flexible. However, as businesses will inevitably adopt multi-cloud strategies to support their processes, they will require interoperable security platforms that can seamlessly integrate across multiple cloud environments. 


Maximizing AI Payoff in Banking Will Demand Enterprise-Level Rewiring

Beyond thinking in broad strokes of AI’s applicability in the bank, McKinsey holds that an institution has to be ready to adopt multiple kinds of AI, set up to work with each other. This includes analytical AI — the types of AI that some banks have been using for years for credit and portfolio analysis, for instance — and generative AI, in the forms of ChatGPT and others, as well as “agentic AI.” In general, agentic AI applies other types of AI to perform analyses and solve problems as a “virtual coworker.” It’s a developing facet of AI and, as described in the report, is meant to manage multiple AI inputs, rather than having a bank lean on one model. ... “You measure the outcomes you want to achieve and at the end of the pilot you will typically come out with a very good understanding of how to scale it,” Giovine says. Over six to 12 months after the pilot, “you can scale it over a good chunk of the domain.” And here, the consultant says, is where the bonus kicks in: Often a good deal of the work done to bring AI thinking to one domain can be re-used. This applies to both the business thinking and technology.


Synthetic data has its limits — why human-sourced data can help prevent AI model collapse

The more AI-generated content spreads online, the faster it will infiltrate datasets and, subsequently, the models themselves. And it’s happening at an accelerated rate, making it increasingly difficult for developers to filter out anything that is not pure, human-created training data. The fact is, using synthetic content in training can trigger a detrimental phenomenon known as “model collapse” or “model autophagy disorder (MAD).” Model collapse is the degenerative process in which AI systems progressively lose their grasp on the true underlying data distribution they’re meant to model. This often occurs when AI is trained recursively on content it generated, leading to a number of issues:
- Loss of nuance: Models begin to forget outlier data or less-represented information, crucial for a comprehensive understanding of any dataset.
- Reduced diversity: There is a noticeable decrease in the diversity and quality of the outputs produced by the models.
- Amplification of biases: Existing biases, particularly against marginalized groups, may be exacerbated as the model overlooks the nuanced data that could mitigate these biases.
- Generation of nonsensical outputs: Over time, models may start producing outputs that are completely unrelated or nonsensical.
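A toy numerical illustration of the effect: fit a simple model to data, sample from it, keep only the most typical samples (mimicking a generator's preference for high-likelihood outputs), and refit. The spread of the data shrinks generation after generation, which is the loss of nuance and diversity described above in miniature:

```python
import random, statistics

# Toy model-collapse demo with a Gaussian standing in for the model.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(5000)]   # "human-created" data

for generation in range(6):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    print(f"gen {generation}: mean={mu:+.3f}  std={sigma:.3f}")
    synthetic = [random.gauss(mu, sigma) for _ in range(5000)]
    # Recursive training on the model's own "safe" outputs: outliers vanish.
    synthetic.sort(key=lambda x: abs(x - mu))
    data = synthetic[:4000]          # drop the least typical 20% each round
```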


The Macy’s accounting disaster: CIOs, this could happen to you

It wasn’t outright fraud or theft. But that’s merely because the employee didn’t try to steal. But the same lax safeguards that allowed expense dollars to be underreported could have just as easily allowed actual theft. “What will happen when someone actually has motivation to commit fraud? They could have just as easily kept the $150 million,” van Duyvendijk said. “They easily could have committed mass fraud without this company knowing. (Macy’s) people are not reviewing manual journals very carefully.” ... “It’s true that most ERPs are not designed to catch erroneous accounting,” she said. “However, there are software tools that allow CFOs and CAOs to create more robust controls around accounting processes and to ensure the expenses get booked to the correct P&L designation. Initiating, approving, recording transactions, and reconciling balances are each steps that should be handled by a separate member of the team. There are software tools that can assist with this process, such as those that enable use of AI analytics to assess actual spend and compare that spend to your reported expenses. Some such tools use AI to look for overriding journal entries that reverse expense items and move those expenses to a balance sheet account.”
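A hedged sketch of the kind of automated control described: flag manual journal entries that credit an expense account while debiting a balance-sheet account for the same entry. The account-number ranges, column names, and amounts are entirely hypothetical:

```python
import pandas as pd

# Hypothetical journal lines; 6xxx = expense accounts, 1xxx/2xxx = balance sheet.
journal = pd.DataFrame({
    "entry_id":   ["J1", "J1", "J2", "J2"],
    "account":    [6120, 1890, 6120, 2010],
    "debit":      [0.0, 5_000_000.0, 12_500.0, 0.0],
    "credit":     [5_000_000.0, 0.0, 0.0, 12_500.0],
    "entry_type": ["manual", "manual", "manual", "manual"],
})

def suspicious_reversals(df: pd.DataFrame) -> pd.DataFrame:
    """Manual entries that reverse an expense into a balance-sheet account."""
    manual = df[df["entry_type"] == "manual"]
    flagged = []
    for entry_id, lines in manual.groupby("entry_id"):
        expense_credit = lines[(lines["account"] // 1000 == 6) & (lines["credit"] > 0)]
        bs_debit = lines[(lines["account"] // 1000 < 3) & (lines["debit"] > 0)]
        if not expense_credit.empty and not bs_debit.empty:
            flagged.append(entry_id)
    return manual[manual["entry_id"].isin(flagged)]

print(suspicious_reversals(journal))   # J1 is flagged; J2 books a normal expense
```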


Digital Nomads and Last-Minute Deals: How Online Data Enables Offline Adventures

Along with remote work preference, the pandemic boosted another trend. Many emerged from it more spontaneous, seeing how travel can be restricted so suddenly and for so long. Even before, millennials were ready to embrace impromptu travel, with half of them having planned last-minute vacations. For digital nomads, last-minute deals for flights and hotels are even more important as they need to adapt to changing situations quickly to strike a work-life balance on the go. This opens opportunities for websites to offer services that assist digital nomads in finding the best last-minute deals. ... Many of the first successful startups by the nomads were teaching about the nomadic lifestyle or connecting the nomads with each other. For example, some websites use APIs to aggregate data about the suitability of cities for remote work. Drawing data from various online sources in real time, such platforms can constantly provide information relevant to traveling remote workers. And the relevant information is very diverse. The aforementioned travel and hospitality prices and deals alone generate volumes of data every second. Then, there is information about security and internet stability in various locations, which requires reliable and constantly updated reviews.
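A minimal sketch of that aggregation pattern: fan out to several data sources concurrently and merge the results into a single city profile. The fetcher functions, endpoints, and scoring weights are invented stand-ins for real APIs so the example runs offline:

```python
from concurrent.futures import ThreadPoolExecutor

# Each fetcher stands in for a call to a real API (connectivity, safety, deals).
def fetch_internet_speed(city):   return {"mbps": 85}
def fetch_safety_index(city):     return {"safety": 71}
def fetch_hotel_deal(city):       return {"nightly_usd": 48}

FETCHERS = [fetch_internet_speed, fetch_safety_index, fetch_hotel_deal]

def city_profile(city: str) -> dict:
    profile = {"city": city}
    with ThreadPoolExecutor() as pool:            # real calls are I/O-bound, so fan out
        for result in pool.map(lambda f: f(city), FETCHERS):
            profile.update(result)
    profile["nomad_score"] = round(
        profile["mbps"] / 100 * 40
        + profile["safety"] / 100 * 40
        + max(0, 1 - profile["nightly_usd"] / 150) * 20, 1)
    return profile

print(city_profile("Lisbon"))
```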


It’s not what you know, it’s how you know you know it

Developers and technologists have increasingly been learning to code from online media such as blogs and videos over the last four years, according to the Stack Overflow Developer Survey: 60% in 2021, rising to 82% in 2024. The latest resource that developers can utilize for learning is generative AI, which is emerging as a key tool that offers real-time problem-solving assistance, personalized coding tips, and innovative ways to enhance skill development, seamlessly integrated within daily workflows. There has been a lot of excitement in the world of software development about AI’s potential to increase the speed of learning and access to more knowledge. Speculation abounds as to whether learning will be helped or hindered by AI advancement. Our recent survey of over 700 developers and technologists reveals the process of knowing things is just that—a process. New insights about how the Stack Overflow community learns demonstrate that software professionals prefer to gain and share knowledge through hands-on interactions. Their preferences for sourcing and contributing to groups or individuals (or AI) provide color on the evolving landscape of knowledge work.


What is data science? Transforming data into value

While closely related, data analytics is a component of data science, used to understand what an organization’s data looks like. Data science takes the output of analytics to solve problems. Data scientists say that investigating something with data is simply analysis, so data science takes analysis a step further to explain and solve problems. Another difference between data analytics and data science is timescale. Data analytics describes the current state of reality, whereas data science uses that data to predict and understand the future. ... The goal of data science is to construct the means to extract business-focused insights from data, and ultimately optimize business processes or provide decision support. This requires an understanding of how value and information flows in a business, and the ability to use that understanding to identify business opportunities. While that may involve one-off projects, data science teams more typically seek to identify key data assets that can be turned into data pipelines that feed maintainable tools and solutions. Examples include credit card fraud monitoring solutions used by banks, or tools used to optimize the placement of wind turbines in wind farms.


Tech Giants Retain Top Spots, Credit Goes to Self-Disruption

Companies today know they are not infallible in the face of evolving technologies. They are willing to disrupt their tried and tested offerings to fully capitalize on innovation. This ability of "dual transformation" - sustaining as well as reinventing the core business - is a hallmark of successful incumbents. It enables companies to optimize their existing operations while investing in the future, ensuring they are not caught flat-footed when the next wave of disruption hits. And because they have capital, talent and resources, they are already ahead of newer players. ... There is also a core cultural shift to encourage innovative thinking. Amazon implemented its famous "two-pizza teams" approach, where small, autonomous groups work on focused projects with minimal bureaucracy. Launched during the dot-com boom, Amazon subsequently ventured into successful innovations, including Prime, AWS and Alexa. Google's longstanding "20% time" policy, which allows employees to dedicate a portion of their workweek to passion projects, resulted in breakthrough products including AdSense and Google News. Drawing from decades of experience, these organizations know the whole is greater than the sum of its parts.


The Power of the Collective Purse: Open-Source AI Governance and the GovAI Coalition

Collaboration and transparency often go hand in hand. One of the most significant outcomes of the GovAI Coalition’s work is the development of open-source resources that benefit not only coalition members but also vendors and uninvolved governments. By pooling resources and expertise, the coalition is creating a shared repository of guidelines, contracting language, and best practices that any government entity can adapt to their specific needs. This collaborative, open-source initiative greatly reduces the transaction costs for government agencies, particularly those that are understaffed or under-resourced. While the more expansive budgets and technological needs of larger state and local governments sometimes lead to outsized roles in Coalition standard-setting, this allows smaller local governments, which may lack the capacity to develop comprehensive AI governance frameworks independently, to draw on the Coalition’s collective institutional expertise. This crowd-sourced knowledge ensures that even the smallest agencies can implement robust AI governance policies without having to start from scratch.


Redefining software excellence: Quality, testing, and observability in the age of GenAI

Traditional test automation has long relied on rigid, code-based frameworks, which require extensive scripting to specify exactly how tests should run. GenAI upends this paradigm by enabling intent-driven testing. Instead of focusing on rigid, script-heavy frameworks, testers can define high-level intents, like “Verify user authentication,” and let the AI dynamically generate and execute corresponding tests. This approach reduces the maintenance overhead of traditional frameworks, while aligning testing efforts more closely with business goals and ensuring broader, more comprehensive test coverage. ... QA and observability are no longer siloed functions. GenAI creates a semantic feedback loop between these domains, fostering a deeper integration like never before. Robust observability ensures the quality of AI-driven tests, while intent-driven testing provides data and scenarios that enhance observability insights and predictive capabilities. Together, these disciplines form a unified approach to managing the growing complexity of modern software systems. By embracing this symbiosis, teams not only simplify workflows but raise the bar for software excellence, balancing the speed and adaptability of GenAI with the accountability and rigor needed to deliver trustworthy, high-performing applications.
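A hedged sketch of what intent-driven testing can look like in code: the tester supplies an intent string, a generator (an LLM in a real system, faked here with canned output) expands it into concrete steps, and a runner executes them. Names like llm_generate_steps and the step vocabulary are placeholders, not an actual framework API:

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str      # e.g. "visit", "fill", "click", "expect"
    target: str
    value: str = ""

def llm_generate_steps(intent: str) -> list[Step]:
    # In a real system this prompts a model with the intent plus app context.
    canned = {
        "Verify user authentication": [
            Step("visit", "/login"),
            Step("fill", "#email", "qa@example.com"),
            Step("fill", "#password", "correct-horse"),
            Step("click", "#submit"),
            Step("expect", "#dashboard", "visible"),
        ]
    }
    return canned.get(intent, [])

def run_intent(intent: str, driver) -> None:
    for step in llm_generate_steps(intent):
        print(f"{step.action:>6} {step.target} {step.value}".rstrip())
        # driver.perform(step)  # dispatch to Selenium/Playwright or similar

run_intent("Verify user authentication", driver=None)
```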



Quote for the day:

"Success is not the key to happiness. Happiness is the key to success. If you love what you are doing, you will be successful." -- Albert Schweitzer

Daily Tech Digest - December 03, 2024

Why DevOps Is Backward and How We Can Solve It

Perhaps the term “DevOps” simply rolls off the tongue better than “OpDev,” but the argument could be made that since development comes first, operations will follow. But if we look under the hood, most shops actually do run “OpDev” pipelines, even though they do not recognize how that came about within the organization. ... Without a very strict CI/CD pipeline and (usually) many team members keeping infrastructure safe and cost efficient, operations is a Sisyphean task, and most importantly it’s slow. ... So we need a better way to handle infrastructure without turning the ops team into firefighters rather than cooperative team members. Correspondingly we want to enable the devs to build unencumbered by strict rule sets as well as preserve the agile nature and fast pace of development. ... More realistic and easily workable methods like Nitric abstract away the platform as a service SDKs from the codebase and replace the developers’ infra requirements with a library of tools that can be referenced exactly the same, no matter where the finalized code is deployed. The operations teams can easily maintain the needed infra patterns in a centralized location, reducing the need to solve issues after code PRs. 
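To make the idea concrete, here is a deliberately simplified sketch of that abstraction — explicitly not Nitric's actual API: application code declares the resources it needs, and a provider chosen at deploy time maps those declarations onto local or cloud infrastructure that the ops team maintains centrally:

```python
# Hypothetical resource-abstraction sketch (illustrative only, not Nitric's API).
class LocalProvider:
    def bucket_write(self, bucket, key, data):
        print(f"[local] {bucket}/{key}: {len(data)} bytes")

_PROVIDERS = {"local": LocalProvider}            # ops team registers cloud providers here

def get_provider(target="local"):
    return _PROVIDERS[target]()

class Bucket:
    def __init__(self, name: str):
        self.name, self._provider = name, get_provider()
    def write(self, key: str, data: bytes):
        self._provider.bucket_write(self.name, key, data)

uploads = Bucket("uploads")                      # same application code on every target
uploads.write("report.pdf", b"%PDF-1.7 ...")
```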


5 dead-end IT skills — and how to avoid becoming obsolete

In software development today, automated testing is already well established and accelerating. But new opportunities in QA will appear focused on what to test and how, he says, along with the skills necessary to identify security risks and other issues with code that’s created by AI. Jobs for experienced software test engineers won’t disappear overnight, but understanding what AI brings to the equation and making use of it could be key to stay relevant this area. “In order to survive and extend their career — whatever the job role — humans should master the art of leveraging AI as an assistant and embrace it,” Palaniappan says. ... “With the growth of cloud-native and serverless databases, employers are now more interested in your understanding of database architecture and data governance in cloud environments,” Lloyd-Townshend says. “To keep moving in the right direction in your career, it’s important to develop adaptive problem-solving skills and not just rely solely on specific technical expertise.” Hafez agrees activities around database management will be a casualty of technological evolution, especially ones focused on “repetitive activities such as backups, maintenance, and optimization.”


The dangers of fashion-driven tech decisions

The fact that some companies are having success with generative AI, or Kubernetes, or whatever, doesn’t mean that you will. Our technology decisions should be driven by what we need, not necessarily by what we read. ... Google created Kubernetes to handle cluster orchestration at massive scale. It’s a microservices-based architecture, and its complexity is only worth it at scale. For many applications, it’s overkill because, let’s face it, most companies shouldn’t pretend to run their IT like Google. So why do so many keep using it even though it clearly is wrong for their needs? ... Andrej Karpathy, part of OpenAI’s founding team and previously director of AI at Tesla, notes that when you prompt an LLM with a question, “You’re not asking some magical AI. You’re asking a human data labeler,” one “whose average essence was lossily distilled into statistical token tumblers that are LLMs.” The machines are good at combing through lots of data to surface answers, but it’s perhaps just a more sophisticated spin on a search engine. ... That might be exactly what you need, but it also might not be. Rather than defaulting to “the answer is generative AI,” regardless of the question, we’d do well to better tune how and when we use generative AI.


The race is on to make AI agents do your online shopping for you

Just as AI chatbots have proven somewhat useful for surfacing information that’s hard to find through search engines, AI shopping agents have the potential to find products or deals that you might not otherwise have found on your own. In theory, these tools could save you hours when you need to book a cheap flight, or help you easily locate a good birthday present for your brother-in-law. ... If AI shopping agents really take off, it could mean fewer people going to online storefronts, where retailers have historically been able to upsell them or promote impulse purchases. It also means that advertisers may not get valuable information about shoppers, so they can be targeted with other products. For that reason, those very advertisers and retailers are unlikely to let AI agents disrupt their industries without a fight. That’s part of why companies like Rabbit and Anthropic are training AI agents to use the ordinary user interface of a website — that is, the bot would use the site just like you do, clicking and typing in a browser in a way that’s largely indistinguishable from a real person. That way, there’s no need to ask permission to use an online service through a back end — permission that could be rescinded if you’re hurting their business.


2025 will be a bad year for remote work

CEOs don’t trust their employees to work hard at home and fear they’re watching daytime TV in their pajamas while on the clock. They treat office presence, and the sight of employees who appear to be working, as a proxy for productivity. They can feel personally more comfortable when they can walk around, interact with employees, and manage and supervise in person. Some CEOs also feel the need to justify their spending on office space, office equipment, and other costs associated with office work. Whatever the reasons, there’s a general disagreement between employees, who mostly want the option to work from home, and CEOs, who mostly want to require employees to come into the office. ... The remote work revolution will take a serious hit next year, both in government and business. Then, with new generations of workers and leaders gradually rising in the workforce in the coming decade, plus remote work-enabling technologies like AI (specifically agentic AI) and augmented reality growing in capability, remote work will make a slow, inevitable, and permanent comeback. In the meantime, 2025 will be a rough year for remote workers. But it also represents a huge opportunity for startups and even established companies to hire the very best employees who are turned away elsewhere because they insist on working remotely.


Japan’s Next Step With Open-Source Software: Global Strategy

Japanese open-source developers are renowned for their skill, dedication, and meticulous focus on quality and detail. Their contributions have shaped global projects and produced standout achievements, such as the Ruby programming language, which exemplifies Japan's influence in open-source development. However, corporate policies in Japan have often been cautious regarding open source, particularly concerning licensing, lack of resources for future development, security worries, and other perceived limitations. While large Japanese corporations contribute significantly to open-source projects, they lag behind their U.S. and European counterparts in leveraging open-source as a core component of their products and services. This is now beginning to change. Open source is increasingly recognized as a way to accelerate development and expand global reach. Japanese companies are looking to open-source as a tool for increasing the speed of development, not just as a way to get projects up and running. ... It's true that when developing something, you should spend your time solving your own unique problems, and rely on existing tools, combined with one another, for the problems they already solve.


7 Critical Education Trends That Will Define Learning In 2025

As machines become more efficient at analyzing trends, crunching numbers and generating reports, the value of the skills that they still can’t replicate will grow. This means that educators should increasingly focus on nurturing these soft, "human" skills, like critical thinking, big-picture strategy, communication, emotional intelligence, leadership and teamwork. Expect to see greater integration of these into mainstream education as we train to become more effective at high-value tasks involving person-to-person interactions and navigation of complex and chaotic real-world situations. ... All learners are different – we take in information at different speeds; while some of us absorb knowledge better from videos, some benefit more from group discussions or activity-based learning. Personalized learning promises to deliver education in a way that's tailored to the specific strengths of individual students. This means tailored lesson plans, assessments and learning materials. In 2025 we will see experiments and pilot projects involving using AI to accomplish this begin to move into the mainstream, as well as the emergence of AI tutoring aids that are able to track the progress of students in real time and adjust the delivery of learning on-the-fly to create dynamic and engaging learning environments.


How an Effective AppSec Program Shifts Your Teams From Fixing to Building

While tools and processes are critical, they only address the technical side of the challenge. Ensuring a cohesive culture of cooperation between development and security teams is just as important. There must be a solid partnership between both sides for efforts to succeed. Implementing a security mentorship program can be an effective way to deliver this collaboration. By appointing senior engineers as mentors, organizations can leverage existing expertise to guide developers through secure coding practices. These mentors provide real-time support, offering just-in-time advice when critical vulnerabilities arise. This not only helps resolve security issues faster but also ensures developers can remain focused on delivering high-performance code. Such mentorships are a great opportunity for individual engineers too, offering the chance to broaden their skills and further their careers.   ... Effective AppSec doesn’t have to come at the cost of speed and innovation. Fostering collaboration between development and security teams and integrating security seamlessly into workflows will make lives easier — while ensuring there is minimal impact to production schedules.


The Evolution of Time-Series Models: AI Leading a New Forecasting Era

The power of machine learning (ML) methods in time series forecasting first gained prominence during the M4 and M5 forecasting competitions, where ML-based models significantly outperformed traditional statistical methods for the first time. In the M5 competition (2020), advanced models like LightGBM, DeepAR, and N-BEATS demonstrated the effectiveness of incorporating exogenous variables—factors like weather or holidays that influence the data but aren’t part of the core time series. This approach led to unprecedented forecasting accuracy. These competitions highlighted the importance of cross-learning from multiple related series and paved the way for developing foundation models specifically designed for time series analysis. ... Looking ahead, combining time series models with language models is unlocking exciting innovations. Models like Chronos, Moirai, and TimesFM are pushing the boundaries of time series forecasting, but the next frontier is blending traditional sensor data with unstructured text for even better results. Take the automobile industry—combining sensor data with technician reports and service notes through NLP to get a complete view of potential maintenance issues. 
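A minimal sketch of forecasting with exogenous variables in the spirit of the M5 approaches: lagged values of the target plus a calendar flag feed a gradient-boosted model (LightGBM here, as mentioned above). The dataset is synthetic and purely illustrative:

```python
import pandas as pd
from lightgbm import LGBMRegressor   # pip install lightgbm

# Build a toy daily series with a weekend flag as the exogenous variable.
dates = pd.date_range("2024-01-01", periods=200, freq="D")
df = pd.DataFrame({"date": dates})
df["holiday"] = (df["date"].dt.dayofweek >= 5).astype(int)          # stand-in exogenous signal
df["demand"] = 100 + 20 * df["holiday"] + df.index.to_numpy() % 7   # toy target

for lag in (1, 7):                          # lag features carry the series' own history
    df[f"lag_{lag}"] = df["demand"].shift(lag)
df = df.dropna()

features = ["holiday", "lag_1", "lag_7"]
train, test = df.iloc[:-14], df.iloc[-14:]

model = LGBMRegressor(n_estimators=200)
model.fit(train[features], train["demand"])
forecast = model.predict(test[features])
print(list(zip(test["date"].dt.date, forecast.round(1))))
```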


Treat AI like a human: Redefining cybersecurity

Treating AI like a human is a perspective shift that will fundamentally change how cybersecurity leaders operate. This shift encourages security teams to think of AI as a collaborative partner with human failings. For example, as AI becomes increasingly autonomous, organizations will need to focus on aligning its use with the business’ goals while maintaining reasonable control over its sovereignty. However, organizations will also need to consider in policy and control design AI’s potential to manipulate the truth and produce inadequate results, much like humans do. ... Effective human oversight should include policies and processes for mapping, managing, and measuring AI risk. It also should include accountability structures, so teams and individuals are empowered, responsible, and trained. Organizations should also establish the context to frame risks related to an AI system. AI actors in charge of one part of the process rarely have full visibility or control over other parts. ... Performance indicators include analyzing, assessing, benchmarking, and ultimately monitoring AI risk and related effects. Measuring AI risks includes tracking metrics for trustworthy characteristics, social impact, and human-AI dependencies. 



Quote for the day:

"The distance between insanity and genius is measured only by success." -- Bruce Feirstein

Daily Tech Digest - January 10, 2024

It's Time to Take a Modern Approach to Password Management

Standards for decentralized identity are being advocated by recognized bodies such as W3C. While regulations and other aspects such as authorization, role, and attribute-based access are still further developing, businesses and institutions now have the opportunity to create interoperable designs that can seamlessly integrate with this new model. In this architecture, the most trusted identity providers are likely to play a dominant role as decentralized issuers (DID), which will be crucial for the adoption of VCs. Users are more likely to trust these established brands to certify their digital credentials. However, new vendors, brands, and institutions may emerge to compete in this space and position themselves as market leaders. Furthermore, a witness ledger, which offers traceability and trust of VC transactions, will likely be supported by a technology similar to a blockchain network but more eco-friendly. This will enable digital merchants to verify the credibility of a credential, and ultimately of their potential customers.


Putting AI to Work: Systems of Intelligence and Actionable Agency

Pervasive AI will create a new System of Intelligence (SoI) that integrates data, technologies, platforms, and practices for the purposes of finding and understanding patterns, extracting insights, promoting efficiency and creativity, and facilitating decision-making. This will illuminate how organizations actually perform on a functional basis, through real-time data inputs, allowing people greater awareness and meaningful action. Here is why: The system of intelligence is designed to work in a way that is different from traditional data systems or systems of record. Rather than requiring users to know how to extract insights from the data, the system of intelligence is designed to identify, ask questions and provide insights in a way that is easy for all users to understand. Standards and practices of the SoI are still emerging, which gives leaders the rare chance to both learn from and guide the development of a new system of work in the coming year. This is necessary work since imagining that nothing will change with AI is akin to thinking that, at the dawn of television, radio would simply be transposed wholesale, with no particular effect on culture, process, or business models.


Leveraging Blockchain Technology to Counter the Threat of Deepfake Videos

One of the fundamental features of blockchain is its immutability. Once data is added to a blockchain, it becomes virtually impossible to alter or erase. Applying this characteristic to video content could create an immutable record of the original footage, ensuring that any subsequent alterations or manipulations would be immediately apparent. ... Blockchain’s timestamping capabilities can provide a reliable chronology of when content is created, modified, or accessed. Integrating blockchain into the video creation process allows for the creation of a verifiable and transparent timeline for each piece of content. This timestamping ensures that any attempt to manipulate videos would be easily traceable, enabling swift identification of the source of misinformation and aiding in the attribution of responsibility. ... Blockchain operates on a decentralized network of nodes, each maintaining a copy of the ledger. This decentralized nature can be harnessed for video verification, where multiple nodes across the network can independently verify the authenticity of a given video.
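
As a rough illustration of the hashing and timestamping ideas above, the minimal Python sketch below fingerprints a video file and bundles the digest with a timestamp; how the record actually gets written to a chain is left out, and the function names are assumptions for illustration. Any later manipulation of the footage changes the hash and breaks verification.

import hashlib
import time

def fingerprint_video(path: str) -> str:
    # Compute a SHA-256 digest of the raw video file, read in chunks.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_ledger_record(path: str) -> dict:
    # Bundle the fingerprint with a creation timestamp; this record is what would
    # be written to the (hypothetical) decentralized ledger.
    return {"video_hash": fingerprint_video(path), "recorded_at": int(time.time())}

def matches_registered_original(path: str, ledger_record: dict) -> bool:
    # Verification: re-hash the file and compare against the immutable record.
    return fingerprint_video(path) == ledger_record["video_hash"]

In practice the record would also carry the creator's signature and be replicated across the network's nodes, so that multiple parties can verify a video's authenticity independently.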


How to Build Team Culture in a Remote-Work World

A positive team culture leads to happier employees. This may result in increased productivity over the long run. Because a positive work environment leads to things like friendships and increased levels of support between coworkers, you're more likely to see lower turnover rates and higher employee retention rates when you emphasize team culture. Positive team cultures also reduce levels of stress and anxiety among employees. With a less stressful environment, your skilled workers are more likely to remain with your company long-term. Additionally, they'll share their positive experiences as an employee. As this word-of-mouth spreads, your positive team culture may eventually result in your business becoming a sought-after place to work. In addition to these hiring and personnel benefits, positive team cultures correlate directly with profitability. You might engage in a chance discussion with a coworker over the water cooler or in the breakroom — leading to opportunities, collaborations and innovations that otherwise might not have happened.


Faster than ever: Wi-Fi 7 standard arrives

For home networks, Wi-Fi 7 enhances the performance of smart home devices, providing a more reliable connection for Internet of Things technologies. The improved bandwidth and speed are perfect for families, like mine, which have multiple devices streaming high-definition content simultaneously. Your overall Wi-Fi performance, whether it's just you or your family and friends, will see a dramatic improvement. In businesses, Wi-Fi 7 can support more devices with minimal interference. This capability makes it ideal for large offices and coworking spaces. The improved speed and stability facilitate seamless video conferencing and efficient cloud-based applications, which are essential for modern companies. All that's the good news. The bad news is that the 6 GHz wireless spectrum uses shorter wavelengths. Short wavelengths are great for fast data transfers at close range, so they're well suited to connecting your Wi-Fi 7-enabled HDTV a few feet from your router. But short wavelengths are poor at connecting over long distances and suffer greater interference from physical obstructions, such as dense walls or floors in a building.


3 Essential Attitudes & Dispositions of Good Corporate Governance

While leadership and governance are two concepts that often work in tandem, every director must understand that they are not the same thing. Leadership refers to a person's ability to influence the attitudes and actions of others to lead them toward a common goal. Governance, on the other hand, should be about making decisions that lead to increased corporate performance and meeting or exceeding agreed-upon targets. Directors who are good leaders but never shift their mindset toward governance can behave in selfish and territorial ways, putting undue focus on their own desires and beliefs. Conversely, having the power to govern without the skill of leadership can often lead to passivity and a drift toward bureaucracy, which can easily stop board progress in its tracks. ... Many directors — especially those who are new to the position — struggle with speaking up when they see something happening that needs the board's attention. They may be worried about receiving private or public backlash or derailing the company's progress toward meeting targets.


Why most companies suck at digital transformation

Focus on architecture in the wide, without forgetting architecture in the narrow. Enterprises need to understand the holistic architecture required to support positive digital transformation (DX) outcomes, not just focus on individual systems. This is an outcome of a comprehensive strategy, in that we're utilizing all systems in place, including legacy and other on-premises assets, and establishing how they will work and play well with migrated or net-new systems on public clouds. If companies focus only on small systems or architectures, they usually neglect to understand how those systems will exist within a strategically defined DX ecosystem. This results in decoupled projects that may be impressive on their own but provide little or no value to the larger strategy, which matters more than the individual parts that make it up. ... The most significant issue is that most don't even understand what digital transformation is, even those with the term in their titles. Instead, they focus on the tactics, meaning tools and technology, never understanding the plan to make things incrementally better.


Researchers develop technique to prevent software bugs

Baldur took several months to build. The work was done as a collaboration with Google and built on top of a significant amount of prior research. First, whose team performed its work at Google, used Minerva, an LLM trained on a large corpus of natural-language text, and then fine-tuned it on 118GB of mathematical scientific papers and webpages containing mathematical expressions. Next, she further fine-tuned the LLM on Isabelle/HOL, the language in which the mathematical proofs are written. Baldur then generated an entire proof and worked in tandem with the theorem prover to check its work. When the theorem prover caught an error, it fed the proof, along with information about the error, back into the LLM, so that it could learn from its mistake and generate a new and hopefully error-free proof. This process yields a remarkable increase in accuracy over the previous state-of-the-art tool for automatically generating proofs, called Thor, which could generate proofs 57% of the time.
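
The generate-check-repair loop described above can be sketched roughly as follows. This is a minimal Python sketch: generate_proof and check_proof are hypothetical stand-ins for the fine-tuned LLM and the Isabelle/HOL prover, not Baldur's actual interfaces.

def prove_with_repair(theorem, generate_proof, check_proof, max_attempts=3):
    # Sketch of the loop described above: the LLM drafts a whole proof, the theorem
    # prover checks it, and any error is fed back into the next generation attempt.
    failed_proof, error = None, None
    for _ in range(max_attempts):
        proof = generate_proof(theorem, failed_proof, error)  # LLM attempt, with error context if any
        ok, error = check_proof(theorem, proof)               # prover verifies or reports what went wrong
        if ok:
            return proof
        failed_proof = proof                                  # keep the failed proof for the next prompt
    return None                                               # give up after max_attempts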


Reconciling Agile Development With AI Safety

On one hand, the Agile principles of an iterative approach, regular risk management checks, cross-expertise collaboration, solicitation of third-party feedback at every stage, and adaptability to changing priorities or new findings seem well-suited to the seamless incorporation of responsible AI practices. However, we think responsible AI development will require a full revamp of the software development lifecycle, from pre-training assessments of data to post-deployment monitoring for performance and safety. Some practices, such as automated algorithmic checks (like tests for data bias and model performance metrics, all of which are part of Stanford's HELM set of evaluations), can be utilized anywhere in the development lifecycle. Other techniques may be purely ex post, like algorithmic audits. An Agile approach avoids engineering silos: stage-specific practices can be adopted where necessary, stage-agnostic practices applied at every relevant stage, and the stages themselves can proceed in tandem.
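
As a concrete, deliberately simplified example of such a stage-agnostic check, the Python sketch below computes a demographic-parity gap alongside accuracy and gates a release on both. It is an assumption about what such a check might look like, not part of HELM or any specific toolkit.

def demographic_parity_gap(predictions, groups):
    # Largest difference in positive-prediction rates across groups; a crude bias signal.
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    return max(rates.values()) - min(rates.values())

def release_gate(predictions, labels, groups, max_gap=0.1, min_accuracy=0.8):
    # A check like this can run at any stage of the lifecycle, e.g. as a CI step.
    accuracy = sum(int(p == y) for p, y in zip(predictions, labels)) / len(labels)
    return accuracy >= min_accuracy and demographic_parity_gap(predictions, groups) <= max_gap

# Toy usage with binary predictions and two groups
print(release_gate([1, 0, 1, 1], [1, 0, 1, 0], ["a", "a", "b", "b"]))  # -> False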


Modern-day manufacturing: A process built on data governance

The average manufacturer generates high volumes and many different types of data, including customer information, production orders, and shipment tracking, to name a few. This is further compounded by every supplier, distributor, and third party that's added to the supply chain. Without a system to validate all this data, a manufacturer can find itself with inaccurate or incomplete data. Poor data quality not only leads to operational inefficiencies and mistakes but also hinders the organization's growth by limiting its ability to forecast demand and plan production runs. ... Within complex manufacturing ecosystems, it can be unclear who owns data as it flows across the supply chain. Various teams generate and use different types of data, making ownership and responsibility a challenge to pin down. ... Establishing data ownership involves identifying primary stakeholders who are responsible for ensuring the quality, security, and correct use of data assets.
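
A minimal sketch of what such validation could look like for a single production-order record follows, in Python; the field names and rules are hypothetical, and a real system would check far more.

REQUIRED_FIELDS = {"order_id", "customer_id", "quantity", "ship_date"}

def validate_production_order(record: dict) -> list:
    # Return a list of data-quality issues; an empty list means the record passes.
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    quantity = record.get("quantity")
    if quantity is not None and (not isinstance(quantity, int) or quantity <= 0):
        issues.append("quantity must be a positive integer")
    return issues

print(validate_production_order({"order_id": "PO-1001", "quantity": -5}))
# -> ["missing fields: ['customer_id', 'ship_date']", 'quantity must be a positive integer']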



Quote for the day:

"We live in a society obsessed with public opinion. But leadership has never been about popularity." -- Marco Rubio