
Daily Tech Digest - March 09, 2026


Quote for the day:

"A positive attitude will not solve all your problems. But it will annoy enough people to make it worth the effort" -- Herm Albright




Is AI Killing Sustainability?

This article examines the paradoxical relationship between the rapid growth of artificial intelligence and environmental goals. On one hand, AI's massive computational needs are driving a surge in energy consumption, with global spending projected to reach $2.52 trillion this year. This expansion is fueling an exponential rise in data center power requirements, potentially consuming as much electricity as 22% of U.S. households by 2028. However, the author argues that AI also serves as a critical tool for boosting sustainability. By analyzing vast datasets, AI can optimize supply chains, automate waste management, and enhance energy efficiency in buildings by up to 30%. The piece provides six strategic tips for organizations to utilize AI for greenhouse gas reduction, including predictive environmental risk monitoring, accurate emission reporting, and improved renewable energy integration. Despite these benefits, a tension exists between corporate "green" ambitions and financial constraints, often leading to a "lite green" approach where cost-cutting takes priority over true environmental innovation. Ultimately, while AI's infrastructure poses a significant threat to climate targets, its potential to identify high-ROI decarbonization opportunities offers a path toward reconciling technological advancement with ecological preservation, provided that organizations move beyond superficial commitments toward mature, outcome-driven strategies.


PQC roadmap remains hazy as vendors race for early advantage

The transition to post-quantum cryptography (PQC) is evolving from a theoretical concern into an urgent operational risk, prompting major security vendors to race for early market advantages. As mainstream players like Palo Alto Networks, Cisco, and IBM join specialized firms, the focus has shifted toward structured readiness offerings centered on discovery, inventory, and migration planning. A significant hurdle for organizations remains the lack of visibility into cryptographic sprawl across infrastructure, making it difficult to identify vulnerabilities in legacy algorithms like RSA. The urgency is further fueled by the “harvest now, decrypt later” threat model, where adversaries collect encrypted data today for future decryption by capable quantum computers. While NIST has finalized several PQC standards, experts suggest that the expected moment of cryptographic compromise could arrive as early as 2029, making immediate preparation essential. Despite the marketing push, some observers question whether these PQC offerings represent a new category of security tools or simply a necessary enforcement of long-overdue security hygiene, such as comprehensive asset mapping and certificate tracking. Ultimately, the migration to quantum-safe environments requires a phased approach and a commitment to crypto-agility, ensuring that enterprises can adapt to evolving cryptographic standards before legacy systems become insurmountable liabilities in a post-quantum world.
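
To make the discovery step concrete, here is a minimal sketch of one cryptographic inventory check in Python: pull a server's TLS certificate and flag quantum-vulnerable key algorithms. It assumes the third-party cryptography package, and the hostname is a placeholder; a real inventory would sweep far more than TLS endpoints.

```python
# Minimal sketch: flag TLS endpoints whose certificates rely on
# quantum-vulnerable public-key algorithms (RSA / elliptic curves).
# Assumes the third-party 'cryptography' package; host is a placeholder.
import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

def inventory_endpoint(host: str, port: int = 443) -> dict:
    """Fetch the server certificate and classify its public-key algorithm."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        algo = f"RSA-{key.key_size}"
    elif isinstance(key, ec.EllipticCurvePublicKey):
        algo = f"ECDSA ({key.curve.name})"
    else:
        algo = type(key).__name__
    # Both RSA and elliptic-curve keys fall to Shor's algorithm on a
    # sufficiently large quantum computer.
    vulnerable = isinstance(key, (rsa.RSAPublicKey, ec.EllipticCurvePublicKey))
    return {"host": host, "algorithm": algo, "quantum_vulnerable": vulnerable,
            "expires": cert.not_valid_after.isoformat()}

if __name__ == "__main__":
    print(inventory_endpoint("example.com"))
```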


Tech Debt “For Later” Crashed Production 5 Years Later

This article by Devrim Ozcay critiques the pervasive hype surrounding AI in DevOps, specifically addressing the gap between marketing promises and production realities. The author argues that while "autonomous remediation" and "predictive incident detection" are often touted as revolutionary, they frequently fail in complex, high-stakes environments. These tools often rely on simple logic or pattern matching, and general-purpose models like ChatGPT can be dangerous during active incidents by providing confident but entirely incorrect root cause hypotheses. Instead of relying on AI for critical judgment, the article suggests leveraging it for "assembly" tasks that alleviate the mechanical burden on engineers. This includes filtering log noise, reconstructing incident timelines from disparate sources, and drafting initial postmortem reports. By automating these time-consuming, repetitive processes, teams can reduce the duration of post-incident documentation from hours to minutes. Ultimately, the article advocates for a balanced approach where AI handles the data organization while human engineers retain sole responsibility for interpretation and decision-making. This shift allows practitioners to focus on high-leverage problem-solving rather than tedious transcription, ensuring that incident response remains both efficient and reliable without succumbing to the unrealistic expectations often presented at tech conferences.
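
A minimal sketch of the timeline-reconstruction idea described above, in Python with invented source names and record shapes: normalize timestamps from disparate systems and interleave them into a single incident narrative.

```python
# Merge timestamped events from disparate sources into one timeline.
# The sources and payloads below are illustrative, not from the article.
from datetime import datetime, timezone

def parse_ts(raw: str) -> datetime:
    """Normalize ISO-8601 timestamps to UTC so sources can be interleaved."""
    return datetime.fromisoformat(raw.replace("Z", "+00:00")).astimezone(timezone.utc)

def build_timeline(*sources):
    events = [e for src in sources for e in src]
    events.sort(key=lambda e: parse_ts(e["ts"]))
    return [f'{parse_ts(e["ts"]):%H:%M:%S} [{e["source"]}] {e["msg"]}' for e in events]

alerts = [{"ts": "2026-03-09T10:02:11Z", "source": "pagerduty", "msg": "High error rate on checkout"}]
deploys = [{"ts": "2026-03-09T09:58:40Z", "source": "ci", "msg": "Deployed v2.14.1 to prod"}]
chat = [{"ts": "2026-03-09T10:05:02Z", "source": "slack", "msg": "Rolling back v2.14.1"}]

for line in build_timeline(alerts, deploys, chat):
    print(line)   # a human still interprets the timeline; AI only assembles it
```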


What Is Sampling in LLMs and How Does It Relate to Ethics?

This article explores the technical mechanisms behind how AI models choose their words and the subsequent moral responsibilities of developers. Sampling is the process by which an LLM selects the next token from a probability distribution. Techniques such as temperature, Top-K, and Top-P (nucleus sampling) are used to balance creativity with accuracy. Higher temperature settings introduce more randomness, which can foster innovation but also increases the likelihood of "hallucinations" or the generation of biased and harmful content. Conversely, lower settings make the model more deterministic and reliable for factual tasks but can lead to repetitive and uninspired responses. From an ethical standpoint, the choice of sampling strategy is never neutral. It requires a delicate balance between providing a diverse range of perspectives and ensuring the safety and truthfulness of the output. The author emphasizes that organizations must transparently define their sampling parameters to mitigate risks like misinformation. Ultimately, ethical AI development hinges on understanding these technical levers, as they directly influence how a model perceives and interacts with human values, necessitating a cautious approach to model tuning that prioritizes user safety and informational integrity.
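
The mechanics are easy to see in code. Below is a compact NumPy sketch of the three controls named above, applied to a toy next-token distribution; production samplers are more elaborate, but the levers are the same.

```python
# Toy next-token sampler showing temperature, top-k, and top-p (nucleus).
import numpy as np

def sample(logits, temperature=1.0, top_k=None, top_p=None, rng=None):
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())          # softmax, numerically stable
    probs /= probs.sum()
    if top_k is not None:                          # keep only the k most likely tokens
        cutoff = np.sort(probs)[-min(top_k, probs.size)]
        probs = np.where(probs >= cutoff, probs, 0.0)
        probs /= probs.sum()
    if top_p is not None:                          # smallest prefix with mass >= p
        order = np.argsort(probs)[::-1]
        n_keep = int(np.searchsorted(np.cumsum(probs[order]), top_p)) + 1
        mask = np.zeros_like(probs)
        mask[order[:n_keep]] = 1.0
        probs *= mask
        probs /= probs.sum()
    return int(rng.choice(probs.size, p=probs))

logits = [2.0, 1.5, 0.3, -1.0, -2.5]               # toy vocabulary of five tokens
print(sample(logits, temperature=0.7, top_k=3, top_p=0.9))
```

Low temperature sharpens the distribution toward the top token (determinism); high temperature flattens it (diversity, and more room for hallucination). The ethical trade-off the article describes lives in these few lines.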


AI Won't Fix Cybersecurity, But It Could Rebalance It

The article explores the nuanced role of artificial intelligence in cybersecurity, debunking the myth that it serves as a total panacea while highlighting its potential to rebalance the long-standing asymmetric advantage held by attackers. Traditionally, cybercriminals have enjoyed a lower barrier to entry and a higher success rate because defenders must be perfect across every surface, whereas attackers only need to succeed once. With the advent of generative AI, malicious actors are leveraging the technology to craft sophisticated phishing campaigns, automate vulnerability discovery, and democratize complex malware creation. Conversely, AI empowers defenders by automating routine monitoring, identifying anomalous patterns at machine speed, and bridging the significant talent gap within the industry. This technological shift creates a perpetual arms race where AI functions as a force multiplier for both sides. Rather than eliminating threats, AI recalibrates the battlefield, allowing security teams to process vast datasets and respond to incidents with unprecedented agility. However, the human element remains indispensable; strategic oversight and critical thinking are essential to guide AI tools. Ultimately, while AI will not "fix" the inherent vulnerabilities of digital infrastructure, it offers a vital mechanism to shift the strategic advantage back toward those safeguarding the digital frontier.


AI Is Not Here to Replace People, It’s Here to Replace Waiting

In this insightful interview, Aliaksei Tulia, the Chief Technical Officer at CoinsPaid, argues that the true purpose of artificial intelligence in the financial sector is not to displace human judgment but to eliminate the friction of waiting. Tulia emphasizes that AI acts as a powerful catalyst for efficiency and speed within the digital payment ecosystem by automating repetitive, high-volume tasks that traditionally create operational bottlenecks. By handling routine duties such as document summarization, log scanning, and boilerplate coding, AI allows for a significant compression of cycle times while maintaining necessary human oversight. The article highlights how CoinsPaid integrates these intelligent tools to enhance consistency and visibility, ensuring that the platform remains robust without sacrificing control. Furthermore, the discussion explores the essential division of labor where technology manages data-heavy routine processes, freeing professionals to focus on high-level strategic decisions, complex problem-solving, and improving the overall customer experience. This pragmatic approach represents a shift where AI handles the disciplined "first pass," allowing people to dedicate their expertise to tasks requiring creativity and accountability. Ultimately, Tulia envisions a future where AI-driven automation defines industry standards, proving that the technology’s primary value lies in its ability to streamline operations for a global audience.


Dynamic UI for dynamic AI: Inside the emerging A2UI model

The article "Dynamic UI for Dynamic AI: Inside the Emerging A2UI Model" explores the transformative shift from traditional graphical user interfaces to Agent-to-User Interfaces. As AI agents become increasingly autonomous, the standard chat-based "command line" is no longer sufficient for managing complex workflows. A2UI represents a fundamental paradigm shift where the interface is dynamically generated by the AI to match the specific context and requirements of a task. Unlike static SaaS platforms with fixed menus, A2UI allows agents to create ephemeral, highly functional components—such as interactive charts, data tables, or specialized dashboards—on demand. This movement is powered by advancements like Vercel’s AI SDK and features like Anthropic’s Artifacts, which allow for real-time rendering of code and UI. The goal is to bridge the gap between human intent and machine execution by providing a rich, interactive medium that transcends simple text responses. By embracing generative UI, developers are enabling a more fluid collaboration where the software adapts to the user, rather than the user being forced to navigate rigid software structures. This evolution signals the end of "one-size-fits-all" application design, ushering in a future where every interaction produces a bespoke, temporary interface tailored specifically to the immediate problem.


AI Use at Work Is Causing “Brain Fry,” Researchers Find, Especially Among High Performers

The Futurism article "AI Use at Work Is Causing 'Brain Fry'" highlights a concerning trend where artificial intelligence, despite its promises of productivity, is significantly damaging employee mental health. A study of 1,500 workers conducted by Boston Consulting Group and the University of California, Riverside, introduced the term "AI brain fry" to describe the cognitive exhaustion resulting from excessive interaction with AI tools. Approximately 14 percent of employees—predominantly high performers in fields like software development and finance—reported symptoms such as mental "static," brain fog, and headaches. This fatigue is largely driven by information overload, rapid task-switching, and the constant, draining necessity of overseeing multiple AI agents. Rather than lightening the load, these tools often force users to work harder to manage the technology than to solve actual problems. The consequences are severe for both individuals and organizations; the research found a 33 percent increase in decision fatigue and a higher likelihood of employees quitting their jobs. Ultimately, the piece argues that while AI is marketed as a way to supercharge efficiency, it often acts as a "burnout machine" that compromises cognitive capacity and leads to costly errors or paralysis in professional environments.


Submarine cables move to the center of critical infrastructure security debate

The article examines the escalating strategic significance of submarine cables, which facilitate the vast majority of international data traffic but are increasingly vulnerable to geopolitical tensions and physical threats. A new sector report highlights how high-profile incidents, such as the 2024 Baltic Sea cable severing, have transitioned these underwater assets from ignored infrastructure into critical security priorities. Beyond intentional sabotage or "grey-zone" activities, the industry faces significant resilience challenges, including an annual average of two hundred cable faults primarily caused by commercial fishing and anchoring. This vulnerability is exacerbated by a critical shortage of specialized repair vessels and experienced personnel, complicating rapid incident response. Furthermore, the shift in ownership dynamics, where cloud hyperscalers are now primary investors, creates commercial friction with traditional operators while reshaping infrastructure architecture. Technological advancements, particularly AI-driven distributed acoustic sensing, are transforming cables into active monitoring tools, yet technical solutions alone remain insufficient. The report concludes that long-term security depends on improved international coordination and unified governance frameworks between governments and private entities. Ultimately, protecting these vital conduits requires a holistic approach that integrates technical controls, organizational readiness, and cross-border cooperation to match the scale of modern digital dependency and evolving global risks.


How DevOps Broke Accessibility

In this article on DevOps Digest, the author explores the unintended consequences that the rapid adoption of DevOps practices has had on web accessibility. While DevOps has revolutionized software development by emphasizing speed, continuous integration, and frequent deployments, these very priorities have often sidelined the inclusive design and rigorous accessibility testing required for users with disabilities. The shift-left mentality, which aims to catch bugs early, frequently fails to incorporate accessibility checks into the automated pipeline, leading to a "move fast and break things" culture that disproportionately affects those relying on assistive technologies. Furthermore, the reliance on automated testing tools—which can only detect about 30% of accessibility issues—creates a false sense of security among development teams. This technical debt accumulates quickly in fast-paced environments, making retroactive fixes costly and complex. The article argues that for DevOps to truly succeed, accessibility must be integrated as a core pillar of the development lifecycle, rather than being treated as an afterthought. Ultimately, the piece calls for a cultural shift where developers and stakeholders prioritize human-centric design alongside technical efficiency to ensure the digital world remains open and equitable for every user regardless of their physical or cognitive abilities.
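
For flavor, here is a minimal shift-left accessibility gate in Python, assuming the beautifulsoup4 package: a static scan for two machine-detectable issues. Checks like these are precisely the limited automation (roughly 30% coverage) the article cautions against over-trusting, but they are cheap to wire into a pipeline.

```python
# Static accessibility scan: images without alt text, inputs without labels.
# Assumes beautifulsoup4; the sample page is illustrative.
from bs4 import BeautifulSoup

def audit(html: str) -> list:
    soup = BeautifulSoup(html, "html.parser")
    issues = []
    for img in soup.find_all("img"):
        if not img.get("alt"):
            issues.append(f"img missing alt text: {img.get('src', '?')}")
    labelled = {label.get("for") for label in soup.find_all("label")}
    for field in soup.find_all("input"):
        if field.get("type") in (None, "text", "email", "password") \
                and field.get("id") not in labelled:
            issues.append(f"input missing label: {field.get('name', '?')}")
    return issues

page = ('<img src="hero.png"><label for="q">Search</label>'
        '<input id="q" type="text"><input name="email" type="email">')
for issue in audit(page):
    print(issue)   # a CI stage would fail when this list is non-empty
```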

Daily Tech Digest - September 03, 2024

Cloud application portability remains unrealistic

Enterprises can deploy an application across multiple cloud providers to distribute risk and reduce dependency on a single vendor. This strategy also offers leverage when negotiating terms or migrating services. It may prevent vendor lock-in and provide flexibility to optimize costs by leveraging the most cost-effective services available from different providers. That said, you’d be wrong if you think multicloud is the answer to a lack of portability. You’ll have to attach your application to native features to optimize them for the specific cloud provider. As I’ve said, portability has been derailed, and you don’t have good options. A “multiple providers” approach minimizes the negative impact but does not solve the portability problem. Build applications with portability in mind. This approach involves containerization technologies, such as Docker, and orchestration platforms, such as Kubernetes. Abstracting applications from the underlying infrastructure ensures they are compatible with multiple environments. Additionally, avoiding proprietary services and opting for open source tools can enhance portability and reduce costs associated with reconfigurations or migrations. 
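
A brief Python sketch of that abstraction principle: the application codes against a neutral storage interface, and the provider-specific adapter stays at the edge where it can be swapped. The S3 adapter assumes boto3, and the bucket and paths are illustrative.

```python
# Portability through abstraction: app logic never names a cloud vendor.
from typing import Protocol

class BlobStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class LocalStore:
    def __init__(self, root: str = "/tmp/blobs"):
        import os
        self.root = root
        os.makedirs(root, exist_ok=True)
    def put(self, key: str, data: bytes) -> None:
        with open(f"{self.root}/{key}", "wb") as f:
            f.write(data)
    def get(self, key: str) -> bytes:
        with open(f"{self.root}/{key}", "rb") as f:
            return f.read()

class S3Store:
    def __init__(self, bucket: str):
        import boto3                      # provider SDK isolated in the adapter
        self.bucket, self.s3 = bucket, boto3.client("s3")
    def put(self, key: str, data: bytes) -> None:
        self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)
    def get(self, key: str) -> bytes:
        return self.s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()

def archive_report(store: BlobStore, report: bytes) -> None:
    store.put("report.txt", report)       # identical call against any adapter

archive_report(LocalStore(), b"runs unchanged against local disk or S3")
```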


Will Data Centers in Orbit Launch a New Phase of Sustainability?

Space offers an appealing solution for many of the problems that plague terrestrial data centers. Space-based data centers could use solar arrays to draw power from the sun, alleviating the burden on electrical grids here on Earth. They would not require water for cooling. They would not take up land, disturb people or wildlife. Additionally, natural disasters that can damage or wipe out data centers on Earth -- earthquakes, wildfires, floods, tsunamis -- are a non-issue in space. ... While the upsides of data centers in space are easy to imagine, what will it take to make them a reality? The Advanced Space Cloud for European Net zero emission and Data sovereignty (ASCEND) study set out to answer questions about space data centers’ technical feasibility and their environmental benefits. The study is funded by the European Commission as part of Horizon Europe, a scientific research program. Thales Alenia Space led the study with a consortium of 11 partners, including research organizations and industrial companies from five European countries. Thales Alenia Space announced the results of the 16-month study at the end of June.


Workload Protection in the Cloud: Why It Matters More Than Ever

CWP is a necessity that must not be ignored. As the adoption of cloud technology grows, the scale and complexity of threats also escalate. Here are the reasons why CWP is critical:
- Increased threat environment: Cyber threats are becoming more complex and frequent. CWP tools are crafted to detect and counter these changing threats in real time, delivering enhanced protection for cloud workloads exposed across various networks and environments.
- Protection against data breaches and compliance: Data breaches can lead to severe financial and reputational harm. CWP tools assist organizations in complying with strict regulations like GDPR, HIPAA, and PCI-DSS by implementing strong security protocols and compliance checks.
- Maintenance of operational integrity: It is essential for businesses to maintain the uninterrupted operation of their cloud workloads without being affected by security incidents. CWP tools offer extensive threat detection and automated responses, minimizing disruptions and upholding operational integrity.
- Cost implications: Security breaches can incur substantial costs. Investing in CWP tools helps avert these risks through early identification of vulnerabilities and threats, ultimately protecting organizations from potential financial losses due to breaches and service interruptions.


How Human-Informed AI Leads to More Accurate Digital Twins

The value of a DT is directly proportional to its accuracy, which in turn depends on the data available. But data availability remains a challenge — ironically, often in the business use cases that could benefit the most from DTs — and it’s a big reason why DTs are still in their infancy. DTs could help guide the expansion of current products to new market domains, accelerating R&D and innovation by enabling virtual experimentation. But research activities often involve exploring new territory where data is scarce or protected by patents owned by other organizations. For example, while DTs could inform an organization’s understanding of how a new topology may affect heavy construction equipment or how a smart building may behave under unusual weather conditions, there is limited data available about these new domains. ... DTs can add immense value by reducing costs and the time it takes to develop new processes, but data to develop these models is limited given that the work explores new territory. Further, data-sharing across the supply chain is sharply limited due to extreme sensitivity about intellectual property.


Leveraging AI for enhanced crime scene investigation

Importantly, as crimes are committed or solved, the algorithms and software based on them become more sophisticated. Interestingly, these algorithms use information obtained from various sources without any human intervention, reducing the chances of bias or error. With the increasing use of mobile phones and the internet, information is flooding in the form of photos, videos, audio recordings, emails, letters, newspaper reports, speeches, social media posts, locations, and more. Various AI & ML-based algorithms are used to quickly analyse this data, perform mathematical transformations, draw inferences, and reach conclusions. This makes it possible to predict the likelihood of crimes in a very short time, which is almost impossible otherwise. A smart city-related company in Israel called ‘Cortica’ has developed software that analyzes the information obtained through CCTV. This software utilizes certain AI algorithms to recognize the faces in a crowd, identify crowd behavior and movement, and predict the likelihood and nature of a crime. Interestingly, these intelligent algorithms make it possible to analyze several terabytes of video footage in minimal time and make quite precise inferences.


There are many reasons why companies struggle to exploit generative AI

Some qualitative remarks by executives interviewed revealed more detail on where that lack of preparedness lies. For example, a former vice president of data and intelligence for a media company told Rowan and team that the "biggest scaling challenge" for the company "was really the amount of data that we had access to and the lack of proper data management maturity." The executive continued: "There was no formal data catalog. There was no formal metadata and labeling of data points across the enterprise. We could go only as fast as we could label the data." ... Uncertainty about novel regulations is also causing companies to pause and think, Rowan and team stated in the report: "Organizations were exceedingly uncertain about the regulatory environment that may exist in the future (depending on the countries they operate in)." In response to both concerns, companies are pursuing a variety of strategies, Rowan and team found. These strategies include: "shut off access to specific Generative AI tools for staff"; "put in place guidelines to prevent staff from entering organizational data into public LLMs"; and "build walled gardens in private clouds with safeguards to prevent data leakage into the public cloud."


The role of behavioral biometrics in a world of growing cyberthreats

Behavioral biometrics might be an evolving form of biometric technology, but its foundations are already quite well established. For retail and ecommerce, for example, the lines blur slightly between the terms ‘behavioral biometrics’ and ‘risk-based authentication’. Behavior in this sense isn’t just how people interact with their device, but the location they’re ordering from and to, or the time zone and time of day they’re looking to make a purchase. The extent of risk rises up and down relative to what is deemed ‘typical behavior’ in the broader sense and for that individual transaction. ‘Risk’ refers to the degree of confidence in authentication accuracy and will be key to the rise of behavioral biometrics in other industries too, including healthcare and banking where it is already being deployed to varying extents. It is more about the use case and whether the risk posed is suitable for passive authentication in these cases. In healthcare, for example, passive authentication wouldn’t be sufficient to access patient databases, but once logged in, it could help confirm that the same user is still active or online. ... Aside from the security element, behavioral biometrics can also enable improved personalization and marketing strategies.
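
An illustrative Python sketch of risk-based authentication in that spirit: deviations from a user's typical behavior accumulate into a risk score until a step-up challenge fires. The features, weights, and threshold are invented for the example.

```python
# Toy risk-based authentication: score deviations from a behavioral profile.
TYPICAL = {"country": "DE", "hour_range": (7, 22), "device_id": "laptop-01"}

def risk_score(event: dict, profile: dict = TYPICAL) -> float:
    score = 0.0
    if event["country"] != profile["country"]:
        score += 0.5                      # ordering from an unusual location
    low, high = profile["hour_range"]
    if not (low <= event["hour"] <= high):
        score += 0.3                      # activity at an atypical time of day
    if event["device_id"] != profile["device_id"]:
        score += 0.2                      # unfamiliar device
    return score

def authenticate(event: dict) -> str:
    return "step-up challenge" if risk_score(event) >= 0.5 else "passive pass"

print(authenticate({"country": "DE", "hour": 14, "device_id": "laptop-01"}))
print(authenticate({"country": "BR", "hour": 3, "device_id": "unknown"}))
```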


Data center sustainability is no longer optional

A recent empirical investigation conducted by the Borderstep Institute, in collaboration with the EU, revealed that digital technologies already account for approximately five to nine percent of global electricity consumption and carbon emissions, a number expected to increase as the demand for compute power, driven by the rise of generative artificial intelligence (gen AI) and foundation models, continues to grow. ... Databases are a significant contributor to data center workloads. They are critical for storing, managing, and retrieving large volumes of data, are computationally intensive, and, across thousands of database instances, contribute significantly to the overall energy consumption of data centers. Therefore, artificial intelligence database tuning will be central to any sustainability strategy to increase efficiency. ... Artificial intelligence database tuning offers a revolutionary approach to database management, enabling businesses to achieve high database performance while minimizing their environmental impact. By observing real-time data, AI can identify more effective PostgreSQL configurations that minimize energy usage.
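
Below is a hand-rolled version of the search loop such a tuner automates, assuming psycopg2; the connection string, query, and candidate work_mem values are placeholders. An AI-driven tuner explores this space adaptively rather than exhaustively and can weigh energy draw alongside latency.

```python
# Trial candidate PostgreSQL settings and time a workload query.
# Assumes psycopg2; DSN, query, and candidate values are placeholders.
import time
import psycopg2

CANDIDATES = ["4MB", "16MB", "64MB"]          # work_mem values to trial
QUERY = "SELECT count(*) FROM orders o JOIN items i ON i.order_id = o.id"

def benchmark(dsn: str) -> dict:
    timings = {}
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for value in CANDIDATES:
            cur.execute("SET work_mem = %s", (value,))
            start = time.perf_counter()
            cur.execute(QUERY)
            cur.fetchall()
            timings[value] = time.perf_counter() - start
    return timings

print(benchmark("dbname=shop user=app password=secret host=localhost"))
```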


Building an Accessible Future in the Private Sector

Just like the public sector must make its services accessible to all groups, so must the private sector. Luckily, several regulations make accessibility a legal requirement for the private sector. The most notable is the Americans with Disabilities Act (ADA), a federal law passed in 1990 to prohibit discrimination against people with disabilities in many areas of public life. Title III of the ADA considers websites "public accommodations" and mandates that people with disabilities have equal access. However, true digital accessibility in the modern age needs to go further to ensure all digital products — websites, kiosks, mobile, and web applications — are equally accessible to people with disabilities. ... Companies leading the charge on accessibility are viewed as socially responsible and inclusive, attributes that matter to this generation of consumers. Organizations that value cultivating relationships with diverse customer groups often experience stronger customer loyalty. Brands like Apple and Microsoft are shining examples and have long been praised for providing inclusive technology and experiences. 


How to ensure cybersecurity strategies align with the company’s risk tolerance

One way for CISOs to align cybersecurity strategies with organizational risk tolerance is strategic involvement across the organization. “By forming risk committees and engaging in business discussions, CISOs can better understand and address the risks associated with new technologies and initiatives, and support the organization’s overall strategy,” Carmichael says. An information security committee is vital to this mission, according to Carl Grifka, MD of SingerLewak LLP, an advisory firm that specializes in risk and cybersecurity. “There needs to be a regular assessment of not just the cybersecurity environment, but also the risk tolerance and risk appetite, which is going to drive the controls that we’re going to put in place,” Grifka tells CSO. The committee operates as a cross-functional team that brings together different members of the business, including the executive, IT, security and maybe even a board representative on a more regular basis. Organizations low on the maturity level probably need to meet every couple of weeks, especially if they’re in a remediation phase and working to reduce gaps in the security posture. 



Quote for the day:

"Those who have succeeded at anything and don’t mention luck are kidding themselves." -- Larry King

Daily Tech Digest - July 02, 2024

The Changing Role of the Chief Data Officer

The chief data officer originally played more “defense” than “offense.” The position focused on data security, fraud protection, and Data Governance, and tended to attract people from a technical or legal background. CDOs now may take on a more offensive strategy, proactively finding ways to extract value from the data for the benefit of the wider business, and may come from an analytics or business background. Of course, in reality, the choice between offense and defense is a false one, as companies must do both. ... Major trends for CDOs in the future will include incorporating cutting-edge technology, such as generative AI, large language models, machine learning, and increasingly sophisticated forms of automation. The role is also spreading to a wider variety of industry sectors, such as healthcare, the private sector, and higher education. One of the major challenges is already in progress: responding to the COVID-19 pandemic. The pandemic hugely shook global supply chains, created new business markets, and also radically changed the nature of business itself. 


Duplicate Tech: A Bottom-Line Issue Worth Resolving

The patchwork nature of combined technologies can hinder processes and cause data fragmentation or loss. Moreover, differing cybersecurity capabilities among technologies can expose the organization to increased risk of cyberattacks, as older or less secure systems may be more vulnerable to breaches. Retaining multiple technologies may initially seem prudent in a merger or acquisition, but ultimately it proves detrimental. The drawbacks — from duplicated data and disconnected processes to inefficiencies and security vulnerabilities — far outweigh any perceived benefits, highlighting the critical need for streamlined, unified IT systems. ... There are compelling reasons to remove the dead weight of duplicate technologies and adopt a singular technology. The first step in eliminating tech redundancy is to evaluate existing technologies to determine which tools best align with current and future business needs. A collaborative approach with all relevant stakeholders is recommended to ensure the chosen solution supports organizational goals and avoids unnecessary repetition.


Disability community has long wrestled with 'helpful' technologies—lessons for everyone in dealing with AI

This disability community perspective can be invaluable in approaching new technologies that can assist both disabled and nondisabled people. You can't substitute pretending to be disabled for the experience of actually being disabled, but accessibility can benefit everyone. This is sometimes called the curb-cut effect after the ways that putting a ramp in a curb to help a wheelchair user access the sidewalk also benefits people with strollers, rolling suitcases and bicycles. ... Disability advocates have long battled this type of well-meaning but intrusive assistance—for example, by putting spikes on wheelchair handles to keep people from pushing a person in a wheelchair without being asked to or advocating for services that keep the disabled person in control. The disabled community instead offers a model of assistance as a collaborative effort. Applying this to AI can help to ensure that new AI tools support human autonomy rather than taking over. A key goal of my lab's work is to develop AI-powered assistive robotics that treat the user as an equal partner. We have shown that this model is not just valuable, but inevitable. 


What is the Role of Explainable AI (XAI) In Security?

XAI in cybersecurity is like a colleague who never stops working. While AI helps automatically detect and respond to rapidly evolving threats, XAI helps security professionals understand how these decisions are being made. “Explainable AI sheds light on the inner workings of AI models, making them transparent and trustworthy. Revealing the why behind the models’ predictions, XAI empowers the analysts to make informed decisions. It also enables fast adaptation by exposing insights that lead to quick fine-tuning or new strategies in the face of advanced threats. And most importantly, XAI facilitates collaboration between humans and AI, creating a context in which human intuition complements computational power,” Kolcsár added. ... With XAI working behind the scenes, security teams can quickly discover the root cause of a security alert and initiate a more targeted response, minimizing the overall damage caused by an attack and limiting resource wastage. As transparency allows security professionals to understand how AI models adapt to rapidly evolving threats, they can also ensure that security measures are consistently effective.
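
A minimal sketch of surfacing the "why" behind a verdict, using scikit-learn's global feature importances on synthetic alert data; dedicated XAI tooling such as SHAP or LIME goes further with per-prediction explanations.

```python
# Toy alert classifier plus the feature weights that drive its verdicts.
# Data is synthetic; feature names are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["failed_logins", "bytes_out_mb", "off_hours", "new_geo"]
rng = np.random.default_rng(0)
X = rng.random((500, 4)) * [20, 500, 1, 1]
y = ((X[:, 0] > 10) & (X[:, 3] > 0.5)).astype(int)   # synthetic ground truth

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

alert = np.array([[15.0, 320.0, 1.0, 0.9]])
print("malicious" if model.predict(alert)[0] else "benign")
for name, weight in sorted(zip(FEATURES, model.feature_importances_),
                           key=lambda pair: -pair[1]):
    print(f"{name:>15}: {weight:.2f}")               # global drivers of the model
```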


10 ways AI can make IT more productive

By infusing AI into business processes, enterprises can achieve levels of productivity, efficiency, consistency, and scale that were unimaginable a decade ago, says Jim Liddle, CIO at hybrid cloud storage provider Nasuni. He observes that mundane repetitive tasks, such as data entry and collection, can be easily handled 24/7 by intelligent AI algorithms. “Complex business decisions, such as fraud detection and price optimization, can now be made in real-time based on huge amounts of data,” Liddle states. “Workflows that spanned days or weeks can now be completed in hours or minutes.”  “Enterprises have long sought to drive efficiency and scale through automation, first with simple programmatic rules-based systems and later with more advanced algorithmic software,” Liddle says.  ... “By reducing boilerplating, teams can save time on repetitive tasks while automated and enhanced documentation keeps pace with code changes and project developments.” He notes that AI can also automatically create pull requests and integrate with project management software. Additionally, AI can generate suggestions to resolve bugs, propose new features, and improve code reviews.


How Tomorrow's Smart Cities Will Think For Themselves

When creating a cognitive city, the fundamental need is to move the computing power to where data is generated: where people live, work and travel. That applies whether you’re building a totally new smart city or retrofitting technology to a pre-existing ‘brownfield’ city. Either way, edge is key here. You’re dealing with information from sensors in rubbish bins, drains, and cameras in traffic lights. ... But in years to come the city itself will respond dynamically to the changing physical world, adjusting energy use in real-time to respond to the weather, for example. The evolution of monitoring has come from a machine-to-machine foundation, with the introduction of the Internet of Things (IoT) and now artificial intelligence (AI) becoming transformational in enabling smart technologies to become dynamic. Emerging AI technologies such as large language models will also play a role going forward, making it easy for both city planners and ordinary citizens to interact with the city they live in. Edge will be the key ingredient which gives us effective control of these cities of the future.


Serverless cloud technology fades away

The meaning of serverless computing became diluted over time. Originally coined to describe a model where developers could run code without provisioning or managing servers, it has since been applied to a wide range of services that do not fit its original definition. This led to a confusing loss of precision. It’s crucial to focus on the functional characteristics of serverless computing. The elements of serverless—agility, cost-efficiency, and the ability to rapidly deploy and scale applications—remain valuable. It’s important to concentrate on how these characteristics contribute to achieving business goals rather than becoming fixated on the specific technologies in use. Serverless technology will continue to fade into the background due to the rise of other cloud computing paradigms, such as edge computing and microclouds. ... The explosion of generative AI also contributed to the shifting landscape. Cloud providers are deeply invested in enabling AI-driven solutions, which often require specialized computer resources and significant data management capabilities, areas where traditional serverless models may not always excel.


Infrastructure-as-code and its game-changing impact on rapid solutions development

Automation is one of the main benefits of adopting an IaC approach. By automating infrastructure provisioning, IaC allows configuration to be accomplished at a faster pace. Automation also reduces the risk of errors that can result from manual coding, empowering greater consistency by standardizing the development and deployment of the infrastructure. ... Developers can rapidly assemble and deploy its infrastructure blocks, reusing them as needed throughout the development process. When adjustments are needed, developers can simply update the code the blocks are built on rather than making manual one-off changes to infrastructure components. Testing and tracking are more streamlined with IaC since the IaC code serves as a centralized and readily accessible source for documentation on the infrastructure. It also streamlines the testing process, allowing for automated unit testing of compliance, validation, and other processes before deploying. Additionally, IaC empowers developers to take advantage of the benefits provided by cloud computing. It facilitates direct interaction with the cloud’s exposed API, allowing developers to dynamically provision, manage, and orchestrate resources.
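
A small Python sketch of the pre-deploy compliance testing described above: treat the infrastructure definition as data and unit-test it before it is applied. The resource schema and policy rules here are illustrative, not any specific IaC tool's.

```python
# Unit-test an infrastructure definition before deployment.
# The schema and rules are invented for illustration.
INFRA = {
    "storage_bucket": {"encrypted": True, "public": False, "tags": {"owner": "data-team"}},
    "app_server": {"encrypted": True, "public": True, "allow_public": True, "tags": {}},
}

def check_compliance(resources: dict) -> list:
    violations = []
    for name, cfg in resources.items():
        if not cfg.get("encrypted"):
            violations.append(f"{name}: encryption at rest disabled")
        if cfg.get("public") and not cfg.get("allow_public"):
            violations.append(f"{name}: internet-facing without an exemption")
        if "owner" not in cfg.get("tags", {}):
            violations.append(f"{name}: missing owner tag")
    return violations

problems = check_compliance(INFRA)
if problems:
    raise SystemExit("blocking deployment:\n" + "\n".join(problems))
print("infrastructure definition passes compliance checks")
```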


What is Multimodal AI? Here’s Everything You Need to Know

Multimodal AI describes artificial intelligence systems that can simultaneously process and interpret data from various sources such as text, images, audio, and video. Unlike traditional AI models that depend on a single type of data, multimodal AI provides a holistic approach to data processing. ... Although multimodal AI and generative AI share similarities, they differ fundamentally. For instance, generative AI focuses on creating new content from a single type of prompt, such as creating images from textual descriptions. In contrast, multimodal AI processes and understands different sensory inputs, allowing users to input various data types and receive multimodal outputs. ... Multimodal AI represents a significant advancement in the field of artificial intelligence. Therefore, by understanding and leveraging this advanced technology, data scientists and AI professionals can pave the way for more sophisticated, context-aware, and human-like AI systems, ultimately enriching our interaction with technology and the world around us. 


Excel Enthusiast to Supply Chain Innovator – The Journey to Building One of the Largest Analytic Platforms

While ChatGPT has helped raise awareness about AI capabilities, explaining how to integrate AI has presented challenges, especially when managing over 200 different data analytic reports. To address the different uses, Miranda has simplified AI into three categories: rule-based AI, learning AI (machine learning), and generative AI. Generative AI has emerged as the most dynamic tool among the three for executing and recording data analytics. Its versatility and adaptability make it particularly effective in capturing and processing diverse data sets, contributing to more comprehensive analytics outcomes. Miranda says, “People in analytics might not jump out of bed excited to tackle documentation, but it's a critical aspect of our work. Without proper documentation, we risk becoming a single point of failure, which is something we want to avoid.” ... These recordings are then converted into transcripts and securely stored in a containerized environment, streamlining the documentation process while ensuring data security. Thanks to process automation, Miranda says, the organization freed up 240,000 work hours last year, and it anticipates even more this year.



Quote for the day:

"Life is like riding a bicycle. To keep your balance you must keep moving." -- Albert Einstein

Daily Tech Digest - June 30, 2024

The Unseen Ethical Considerations in AI Practices: A Guide for the CEO

AI’s “black box” problem is well-known, but the ethical imperative for transparency goes beyond just making algorithms understandable and their results explainable. It’s about ensuring that stakeholders can comprehend AI decisions, processes, and implications, guaranteeing they align with human values and expectations. Recent techniques, such as reinforcement learning from human feedback (RLHF), which aligns AI outcomes with human values and preferences, help ensure that AI-based systems behave ethically. This means developing AI systems in which decisions are in accordance with human ethical considerations and can be explained in terms that are comprehensible to all stakeholders, not just the technically proficient. Explainability empowers individuals to challenge or correct erroneous outcomes and promotes fairness and justice. Together, transparency and explainability uphold ethical standards, enabling responsible AI deployment that respects privacy and prioritizes societal well-being. This approach promotes trust, and trust is the bedrock upon which sustainable AI ecosystems are built.
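
For readers curious how the RLHF step works underneath, here is a toy NumPy sketch of the pairwise (Bradley-Terry-style) preference loss commonly used to train reward models: responses humans preferred are pushed to score higher than rejected ones. The reward values are made up.

```python
# Pairwise preference loss for a reward model: -log sigmoid(r_chosen - r_rejected).
import numpy as np

def preference_loss(r_chosen: np.ndarray, r_rejected: np.ndarray) -> float:
    """Mean Bradley-Terry loss; log1p(exp(-m)) is a stable -log sigmoid(m)."""
    margin = r_chosen - r_rejected
    return float(np.mean(np.log1p(np.exp(-margin))))

chosen = np.array([2.1, 0.4, 1.7])        # scores for human-preferred answers
rejected = np.array([0.3, 0.9, -0.5])     # scores for rejected answers
print(preference_loss(chosen, rejected))  # shrinks as preferred answers pull ahead
```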


Cyber resilience - how to achieve it when most businesses – and CISOs – don’t care

Organizations should ask themselves some serious, searching questions about why they are driven to keep doing the same thing over and over again – while spending millions of dollars in the process. As Bathurst put it: Why isn't security by design built in at the beginning of these projects, which are driving people to make the wrong decisions – decisions that nobody wants? Nobody wants to leave us open to attack. And nobody wants our national health infrastructure, ... But at this point, we should remind ourselves that, despite that valuable exercise, both the Ministry of Defence and the NHS have been hacked and/or subjected to ransomware attacks this year. In the first case, via a payroll system, which exposed personal data on thousands of staff, and in the second, via a private pathology lab. The latter incursion revealed patient blood-test data, leading to several NHS hospitals postponing operations and reverting to paper records. So, the lesson here is that, while security by design is essential for critical national infrastructure, resilience in the networked, cloud-enabled age must acknowledge that countless other systems, both upstream and downstream, feed into those critical ones.


Prominent Professor Discusses Digital Transformation, the Future of AI, Tesla, and More

“Customers are always going to have some challenges, and there are constant new technological trends evolving. Digital transformation is about intentionally moving towards making the experience more personalized by weaving new technology applications to solve customer challenges and deliver value,” shared Krishnan. However, as machine learning and GenAI help companies personalize their products and services, the tools themselves are also becoming more niche. “I think we’ll move to more domain and industry-specific generative AI and large language models. The healthcare industry will have an LLM, consumer packaged goods, education, etc,” shared Krishnan. “However, because companies will protect their own data, every large organization will create its own LLM with the private data. That’s why generative AI is interesting because it can actually get to be more personalized while also leveraging the broader knowledge. Eventually, we may all have our own individual GPTs.” ... Although new technologies such as GenAI and machine learning have had an immense impact in such a short time, Krishnan warns that guardrails are necessary, especially as our use of these tools becomes more essential.


Enhancing Your Company’s DevEx With CI/CD Strategies

Cognitive load is the amount of mental processing necessary for a developer to complete a task. Companies generally have one programming language that they use for everything. Their entire toolchain and talent pool is geared toward it for maximum productivity. On the other hand, CI/CD tools often have their own DSL. So, when developers want to alter CI/CD configurations, they must work in a new, rarely-used language. This becomes a time sink and imposes a high cognitive load. One way to avoid giving developers high-cognitive-load tasks without reason is to pick CI/CD tools that use a well-known language. For example, the data serialization language YAML — not always the most loved — is an industry standard that developers would know how to use. ... In software engineering, feedback loops can be measured by how quickly questions are answered. Troubleshooting issues within a CI/CD pipeline can be challenging for developers because they lack visibility and information. These processes often operate as black boxes, running on servers that developers may not have direct access to, using software that is foreign to them.
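
One mitigation, sketched in Python: since many CI systems already configure pipelines in YAML, a short lint script in the team's everyday language gives fast feedback without anyone mastering a CI DSL. This assumes the PyYAML package; the pipeline schema and policy rules are illustrative.

```python
# Lint a YAML pipeline definition with plain Python instead of a CI DSL.
import yaml

PIPELINE = """
stages:
  - name: build
    script: make build
  - name: test
    script: make test
    timeout_minutes: 15
"""

def lint(document: str) -> list:
    config = yaml.safe_load(document)
    errors = []
    for stage in config.get("stages", []):
        name = stage.get("name", "?")
        if "script" not in stage:
            errors.append(f"stage '{name}' has no script")
        if stage.get("timeout_minutes", 10) > 30:
            errors.append(f"stage '{name}' exceeds the 30-minute timeout policy")
    return errors

print(lint(PIPELINE) or "pipeline config OK")
```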


Digital Accessibility: Ensuring Inclusivity in an Online World

"It starts by understanding how people with disabilities use your online platform," he said. While the accessibility issues faced by people who are blind receive considerable attention, it's crucial to address the full spectrum of disabilities that affect technology use, including auditory, cognitive, neurological, physical, speech, and visual disabilities, Henry added. ... The key is to review accessibility during content creation with a diverse group of people and address their feedback in iterations early and often. Bhowmick added that accessibility testing should always be run according to a structured testing script and mature testing methodologies to ensure reliable, reproducible, and sustainable test results. It is important to run accessibility testing during every stage of the software lifecycle: during design, before handing over the design to development, during development, and after development. A professional and thorough testing should take place before releasing the product to customers, Bhowmick said, and the test results should be made available in an accessibility conformance report (ACR) following the Voluntary Product Accessibility Template (VPAT) format.


How Cloud-Native Development Benefits SaaS

Cloud-native practices, patterns, and technologies enhance the benefits of SaaS and COTS while reducing the inherent negatives by:
- Providing an extensible framework for adding new capabilities to commercial applications without having to customize the core product.
- Leveraging API and event-driven architecture to bypass the need for custom data integrations.
- Still offloading the complexity of most infrastructure and security concerns to a provider while gaining additional flexibility in scale and resilience implementation.
- Enabling opportunities to innovate core business systems with emerging technologies such as generative AI.
Enterprises relying on SaaS or COTS still need the flexibility to meet their ever-evolving business requirements. As we have seen with advances in AI over the past year, change and opportunity can arrive quickly and without warning. Chances are that your organization is already on a journey to cloud-native maturity, so take advantage of this effort by implementing technologies and patterns, like leveraging event-driven architectures and serverless functions to extend your commercial applications rather than customizing or replacing them.
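
A minimal Python sketch of that event-driven extension pattern, with invented event names and payload shape: subscribe to a SaaS product's webhooks and add behavior at the edge instead of customizing its core.

```python
# Extend a SaaS product via webhooks rather than core customization.
# Event names and payload fields are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_event(event: dict) -> None:
    if event.get("type") == "invoice.created":
        # New capability lives here, outside the vendor's codebase.
        print(f"enriching invoice {event.get('id')} with internal cost-center data")

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        handle_event(json.loads(self.rfile.read(length) or b"{}"))
        self.send_response(204)           # acknowledge fast; defer heavy work
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```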


Cybersecurity as a Service Market: A Domain of Innumerable Opportunities

Traditional cybersecurity differs from cybersecurity as a service (CSaaS): depending on budget, size, and regulatory compliance requirements, different approaches are needed, and organizations are finding it tedious to rely entirely on themselves. The conventional method of building an internal security team is to hire experienced security staff dedicated to cybersecurity duties, while CSaaS is an option where the company outsources the security function. A survey found that almost 72.1% of businesses find CSaaS solutions critical for their customer strategy. Let us now look at the growth aspects of the cybersecurity-as-a-service market. ... Some of the challenges to market growth are a lack of training and an inadequate workforce, limited security budgets among SMEs, and a lack of information interoperability. The market in North America currently accounts for the maximum share of worldwide revenue. Its growth can be attributed to the high level of digitalization, and the surge in the number of connected devices in these countries is projected to remain a growth-propelling factor.


Top 5 (EA) Services Every Team Lead Should Know

The topic of sustainability is on everyone’s priority list these days. It has become an integral part of sociopolitical and global concepts. Not to mention, more and more customers are asking for sustainable products and services. Or alternatively, they only want to buy from companies that act and operate sustainably themselves. Sustainability must therefore be on the strategic agenda of every company. ... To effectively collaborate with your enterprise IT and ensure the best possible support while you’re making IT-related investment decisions, your IT service providers require feedback. For this, your list of software applications must be known. Deficits and opportunities for improvement need to be identified and, above all, a coordinated investment strategy for your IT services is a must. It has to be clear how you can use your IT budget in the most efficient way. ... What do all these different services have to do with EA? A lot. If the above-mentioned services are understood as EA services, their results form a valuable contribution to the creation of a holistic view of your company – the enterprise architecture.


Ensuring Comprehensive Data Protection: 8 NAS Security Best Practices

NAS devices are convenient to use as shared storage, which means they should be connected to other nodes. Normally, those nodes are the machines inside an organization’s network. However, the growing number of gadgets per employee can lead to unintentional external connections. Internet of Things (IoT) devices are a separate threat category. Hackers can target these devices and then use them to propagate malicious code inside corporate networks. If you connect such a device to your NAS, you risk compromising NAS security and then suffering a cyberattack. ... Malicious software remains a ubiquitous threat to any node connected to the network. Malware can steal, delete, and block access to NAS data or intercept incoming and outgoing traffic. Furthermore, the example of Stuxnet shows that powerful computer worms can disrupt and disable IT hardware or even entire production clusters. Insider threats are another category: when planning an organization’s cybersecurity, IT experts reasonably focus on outside threats, but insiders with legitimate access can pose just as serious a risk to NAS data.


How to design the right type of cyber stress test for your organisation

The success of a cyber stress test largely depends on the realism and relevance of the scenarios and attack vectors used. These should be based on a thorough understanding of the current threat landscape, industry-specific risks, and emerging trends. Scenarios may range from targeted phishing campaigns and ransomware attacks to sophisticated, state-sponsored intrusions. By selecting scenarios that are plausible and aligned with your organisation’s risk profile, you can ensure that the stress test provides valuable insights and prepares your team for real-world challenges. ... A well-designed cyber stress test should encompass a range of activities, from table-top exercises and digital simulations to red team-blue team engagements and penetration testing. This multi-faceted approach allows you to assess the organisation’s capabilities across various domains, including detection, investigation, response, and recovery. Additionally, the stress test should include a thorough evaluation process, with clearly defined success criteria and mechanisms for gathering feedback and lessons learned.



Quote for the day:

“I'd rather be partly great than entirely useless.” -- Neal Shusterman

Daily Tech Digest - May 10, 2022

Tackling tech anxiety within the workforce

The average employee spends over two hours each day on work admin, manual paperwork, and unnecessary meetings. As a result, 81% of workers are unable to dedicate more than three hours of their day to creative, strategic tasks — the very work most ill-suited to machines. Fortunately, this is where digital collaboration comes in. When AI is set to automate certain processes, employees are freer to work on what they love, which often also happens to be what they do best. This extra time back then offers more opportunities to learn, create, and innovate on the job. Take Google’s ‘20% time’ rule, for instance. The policy involves Google employees spending a fifth of their week away from their usual, everyday responsibilities. Instead, they use the time to explore, work, and collaborate on exciting ideas that might not pay off immediately, or even at all, but could eventually reveal big business opportunities. It’s a win-win model for almost every business. At worst, colleagues enjoy the time to strengthen team bonds, improve problem-solving skills, and boost their morale. And at best, they uncover incredible ideas that can change the course of the company.


NFTs Emerge as the Next Enterprise Attack Vector

"The most common attacks try to trick cryptocurrency enthusiasts into handing over their wallet’s recovery phrase," he says. Users who fall for the scam often stand to lose access to their funds permanently, he says. "Bogus Airdrops, which are fake promotional giveaways, are also common and ask for recovery phrases or have the victim connect their wallets to malicious Airdrop sites, he adds, noting that many fake Airdrop sites are imitations of real NFT projects. And with so many small unverified projects around, it’s often hard to determine authenticity, he notes. Oded Vanunu, head of product vulnerability at Check Point Software, says what his company has observed by way of NFT-centric attacks is activity focused on exploiting weaknesses in NFT marketplaces and applications. "We need to understand that all NFT or crypto markets are using Web3 protocols," Vanunu says, referring to the emerging idea of a new Internet based on blockchain technology. Attackers are trying to figure out new ways to exploit vulnerabilities in applications connected to decentralized networks such as blockchain, he notes.


The OT security skills gap

Though responsibility for OT security is often combined with the OT infrastructure design role, in the OT world this is in my opinion less logical, because it is the automation design engineer who has the wider overview of overall business functions in the system. If OT were like IT, that is, primarily data manipulation, it would make sense to put the lead with OT infrastructure design. But because OT is not only data manipulation but also initiates various control actions that need to operate within a restricted operating window, it makes sense to give automation design this coordinating role: automation design oversees all three skill elements and has more detailed knowledge of the production process than the OT infrastructure design role. It is very comparable to cybersecurity in a bank, where the lead role is linked to the overall business process and infrastructure security plays a more supportive role. Finally, there is the process design role: what are its cybersecurity responsibilities? First of all, the process design role understands all the process deviations that can lead to trouble; its practitioners know what that trouble is, how to handle it, and have set criteria for limiting the risk that it occurs.


Ransomware-as-a-service: Understanding the cybercrime gig economy and how to protect yourself

The cybercriminal economy—a connected ecosystem of many players with different techniques, goals, and skillsets—is evolving. The industrialization of attacks has progressed from attackers using off-the-shelf tools, such as Cobalt Strike, to attackers being able to purchase access to networks and the payloads they deploy to them. This means that the impact of a successful ransomware and extortion attack remains the same regardless of the attacker’s skills. RaaS is an arrangement between an operator and an affiliate. The RaaS operator develops and maintains the tools to power the ransomware operations, including the builders that produce the ransomware payloads and payment portals for communicating with victims. The RaaS program may also include a leak site to share snippets of data exfiltrated from victims, allowing attackers to show that the exfiltration is real and try to extort payment. Many RaaS programs further incorporate a suite of extortion support offerings, including leak site hosting and integration into ransom notes, as well as decryption negotiation, payment pressure, and cryptocurrency transaction services.


U.S. White House releases ambitious agenda to mitigate the risks of quantum computing

The first directive, the executive order, seeks to advance QIS by placing the National Quantum Initiative Advisory Committee, the federal government’s main independent expert advisory body for quantum information science and technology, under the authority of the White House. The National Quantum Initiative, established by a law known as the NQI Act, encompasses activities by executive departments and agencies with membership on either the National Science and Technology Council (NSTC) Subcommittee on Quantum Information Science (SCQIS) or the NSTC Subcommittee on Economic and Security Implications of Quantum Science (ESIX). ... The national security memorandum (NSM) plans to tackle the risks posed to encryption by quantum computing. It establishes a national policy to promote U.S. leadership in quantum computing and initiates collaboration among the federal government, industry, and academia as the nation begins migrating to new quantum-resistant cryptographic standards developed by the National Institute of Standards and Technology (NIST).


Industry pushes back against India's data security breach reporting requirements

India's Internet Freedom Foundation has offered an extensive criticism of the regulations, arguing that they were formulated and announced without consultation, lack a data breach reporting mechanism that would benefit end-users, and include data localization requirements that could prevent some cross-border data flows. The foundation also points out that the rules' privacy implications – especially the five-year retention of personal information – are significant at a time when India's Draft Data Protection Bill has proven so controversial that it has failed to reach a vote in Parliament, and debate about digital privacy in India remains fierce. Indian outlet Medianama has quoted infosec researcher Anand Venkatanarayanan, who claimed one way to report security incidents to CERT-In involves a non-interactive PDF that has to be printed out and filled in by hand. Venkatanarayanan also pointed out that the rules' requirement to report incidents as trivial as port scanning has not been explained – is it one PDF per IP address scanned, or can one report cover many IP addresses?


When—and how—to prepare for post-quantum cryptography

Consider data shelf life. Some data produced today—such as classified government data, personal health information, or trade secrets—will still be valuable when the first error-corrected quantum computers are expected to become available. For instance, a long-term life insurance contract may already be sensitive to future quantum threats because it could still be active when quantum computers become commercially available. Any long-term data transferred now on public channels will be at risk of interception and future decryption. Because regulations on PQC do not yet exist, the possibility of data transferred today being decrypted in the future does not yet pose a compliance risk. For the moment, far more significant are the future consequences for organizations, for their customers and suppliers, and for those relationships. However, regulatory considerations will also become relevant as the field develops, which could speed up the need for some organizations to act. Just as with data, some critical physical systems developed today ... will still be in use when the first fully error-corrected quantum computer is expected to come online.
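
One common way to make data shelf life actionable is Mosca's inequality: if the years data must stay confidential plus the years a PQC migration will take exceed the years until a cryptographically relevant quantum computer arrives, that data is already exposed to harvest-now-decrypt-later attacks. Here is a minimal sketch, where every number is an illustrative assumption rather than a prediction:

```python
# Mosca's inequality: if shelf_life + migration_time > years_to_quantum,
# data encrypted today can be harvested now and decrypted later.
# All numbers below are illustrative assumptions, not predictions.

def quantum_exposed(shelf_life_years: float,
                    migration_years: float,
                    years_to_quantum: float) -> bool:
    """True if the asset is exposed under Mosca's inequality."""
    return shelf_life_years + migration_years > years_to_quantum

assets = {
    "marketing analytics": 2,              # shelf life in years (assumed)
    "health records": 25,
    "long-term insurance contract": 40,
}

MIGRATION_YEARS = 5      # assumed enterprise-wide PQC migration time
YEARS_TO_QUANTUM = 10    # assumed arrival of an error-corrected machine

for name, shelf_life in assets.items():
    exposed = quantum_exposed(shelf_life, MIGRATION_YEARS, YEARS_TO_QUANTUM)
    print(f"{name}: {'act now' if exposed else 'can wait'}")
```
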
If we compare railways with, for example, the banking sector, we see we have some catching up to do, but given that we are used to dealing with risks, I am confident this sector is fully able to develop the necessary mechanisms to stay resilient against these new emerging threats. Of course, we could fall victim to some kind of attack someday, just like any other organization. It is up to us to be prepared and stay resilient; I am confident we can do that. ... Actually, any technique, tactic, or procedure (TTP) that can be used against other organizations as well. What we will see, now that our sector is speeding up its digitization, is that the attack surface is broadening and becoming more complex. Trains will become Teslas on rails, with many connections to other digital services such as the European Rail Traffic Management System (ERTMS) and driving via Automatic Train Operation (ATO). The obvious consequence is that we need to be able to withstand those TTPs and plan for mitigation in our digital roadmaps. In the most ideal world, we develop our services cyber-safe by design and by default. There’s work to do there!


How data can improve your website’s accessibility

With an understanding of how data can inform accessibility, it’s time to apply that data toward accessibility improvements. This entails framing your tracked data in the context of the Web Content Accessibility Guidelines (WCAG), which provide the latest standards for ensuring web accessibility. ... WCAG 2.1 is built around four accessibility principles: perceivability, operability, understandability, and robustness, along with conformance requirements for meeting them. Your accessibility KPIs should be tied to these principles. For example, measure conformance through the number of criteria violations flagged during site testing. Metrics like this will help you identify areas for improvement. ... Your approach to gathering accessibility data should not be limited to one tool or testing procedure. Instead, diversify your data to ensure quality. Both quantitative and qualitative metrics factor in, including user feedback, numbers of flagged issues, and insights from all kinds of tests and validation procedures. ... The gamut of usability considerations is broader than most testers can accommodate in one go.
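
As a sketch of the conformance KPI described above, the snippet below tallies violations from an automated scan and groups them by WCAG principle. The result shape loosely follows axe-core-style JSON output, but the rule IDs, tag mapping, and counts here are assumptions for illustration:

```python
from collections import Counter

# Hypothetical mapping from a scanner's rule tags to WCAG principles.
PRINCIPLE_BY_TAG = {
    "wcag111": "perceivable",      # 1.1.1 non-text content
    "wcag211": "operable",         # 2.1.1 keyboard access
    "wcag311": "understandable",   # 3.1.1 language of page
    "wcag412": "robust",           # 4.1.2 name, role, value
}

# Stand-in for parsed scanner output (axe-core-like shape, assumed).
violations = [
    {"id": "image-alt",     "tags": ["wcag111"], "nodes": 14},
    {"id": "button-name",   "tags": ["wcag412"], "nodes": 3},
    {"id": "keyboard-trap", "tags": ["wcag211"], "nodes": 1},
]

def conformance_kpi(violations):
    """Count affected page nodes per WCAG principle; lower is better."""
    kpi = Counter()
    for v in violations:
        for tag in v["tags"]:
            principle = PRINCIPLE_BY_TAG.get(tag)
            if principle:
                kpi[principle] += v["nodes"]
    return kpi

print(conformance_kpi(violations))
# Counter({'perceivable': 14, 'robust': 3, 'operable': 1})
```

Tracking these per-principle counts scan over scan gives the trend line the article recommends, rather than a single pass/fail verdict.
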


Low Code: Satisfying Meal or Junk Food?

“If low code is treated as strictly an IT tool and excludes the line of business -- just like manual coding -- you seriously run the risk of just creating new technical debt, but with pictures this time,” says Rachel Brennan, vice president of Product Marketing at Bizagi, a low-code process automation provider. However, when no-code and low-code platforms are used as much by citizen developers as by software developers, whether they satisfy the hunger for more development depends on how they are used rather than on who uses them. But first, it's important to note the differences between low-code platforms for developers and those for citizen developers. Low code for the masses usually means visual tools and simple frameworks that mask the complex coded operations beneath; typically, these tools can only realistically be used for fairly simple applications. “Low-code tools for developers offer tooling, frameworks, and drag-and-drop options but ALSO include the option to code when the developer wants to customize the application -- for example, to develop APIs, or to integrate the application with other systems, or to customize front-end interfaces,” explains Miguel Valdes Faura.



Quote for the day:

"One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man." -- Elbert Hubbard

Daily Tech Digest - June 22, 2021

What makes a real-time enterprise?

Being a ‘real-time’ enterprise today is typically evaluated against two criteria: the ability to capture, collect, and store data as it comes in, and the ability to respond to it at the point of consumption. Analytics solutions that allow for this are highly sought after, as real-time capability is considered a huge competitive differentiator and critical in our fast-paced digital world. However, while there’s much buzzword bingo about real-time data, decision-making, and insight, enterprises' readiness to become real-time varies due to a lack of understanding of how it practically aligns with their goals, resulting in lost opportunities and wasted resources. ... We find the sudden, hurried shift among enterprises to grasp real-time analytics typically starts when organisations examine their data and see they are not making decisions fast enough to affect business outcomes. Many organisations misconstrue the cause of these common analytics problems as a lack of real-time analytics capability, when there are likely several other factors preventing them from making decisions efficiently and effectively: a long and arduous analysis process, analysis fatigue and human bias resulting in accidental discovery, and a lack of guidance in understanding what the insights mean.


Does Your Cyberattack Plan Include a Crisis Communications Strategy?

During a cyberattack, one of the most overlooked — and consequential — areas for enterprises is implementing an effective crisis communications strategy. Just as you need to shore up the technology, legal, financial, and compliance aspects of your cybersecurity preparation plan, you must also prioritize crisis management and communications. But where should you start? Below are five crisis communications tips to form the foundation of your strategy. ... Our media landscape is characterized by a 24/7 news cycle, ubiquitous social media channels, and misinformation powered by algorithmic artificial intelligence (AI) and delivered instantly on a global scale to billions of people. This shows no sign of abating. What does that mean? Time is not on your side. But with an actionable plan in place, you will be much better prepared. ... With your crisis communications framework in place, it is time for action. Picture this: your company is the target of a ransomware attack, and while you are desperately trying to address it, media outlets are beginning to report the incident, citing posts on Twitter.


How to Retain Your IT Talent

It seems easy to create an open and collaborative work culture, but in IT it can be a special challenge, because the nature of IT work is factual and introspective. It's easy to get buried in a project and forget to communicate status to a workmate -- or, as a CIO, to be consumed by planning or budgeting and forget to “walk the floor” and visit with staff members. Those heading up IT can make a conscious effort to improve open communication by personally engaging with staff and setting that example themselves. When staff members understand IT’s strategic direction because the CIO has directly communicated it to them, along with why they are undertaking certain projects, work becomes purposeful. Team members also benefit when they know that support is available when they need it, and that they can freely go to anyone's office, from the CIO on down. The net result is that people are happier at work and less likely to leave an inclusive work culture. ... From here, training and mentoring plans for developing employee potential should be defined and followed. Career and skills development plans should be targeted at up-and-coming employees and recent hires, as well as longer-term staff who want to cross-train and learn something new.


The positive levers of a digital transformation journey

It’s not just processes; people play an equally important role in the transformation exercise. Shifting from a traditional workplace to a digital one involves an overall change in the mindset of the people behind the business. A company’s culture and behaviour determine how well it can adapt to being ‘digital first’. To undertake digital transformation seamlessly, many organisations ensure transparency by communicating their expectations clearly to employees. The transformation also highlights skill gaps within the organisation and sheds light on which of those gaps can be filled by AI and automation, allowing employee intelligence to be repurposed. Rahul Tandon, head of digital transformation at BPCL, said: “Many initiatives and developments are bringing in a lot of automation and AI with a clear objective to absolve our field teams of all repetitive transactional activities and focus solely on business development and efficient customer interactions.” This approach, he says, has infused new energy into the field teams. “We hope it will become the preferred choice for all stakeholders and eventually impact our bottom line positively.”


How to rethink risks with new cloud deployments

With microservices, you have hundreds of different functions running separately, each with its own unique purpose and triggered by different events. Each of these functions requires its own authentication, and that leaves room for error. Attackers will look for things like a forgotten resource, redundant code, or open APIs with known security gaps to gain access to the environment. This can then allow the attacker to reach a website containing sensitive content or functions without having to authenticate properly. While the service provider will handle much of the password management and recovery workflows, it is up to customers to make sure that the resources themselves are properly configured. However, things get more complicated when functionality is triggered not by an end-user request but during the application flow, in a way that bypasses the authentication schema. To address this, it is important to continuously monitor your application, including the application flow, so you can identify application triggers. From there, you will want to create and categorize alerts for when resources lack appropriate permissions, carry redundant permissions, or exhibit anomalous or non-compliant triggered behavior.
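
As a sketch of that alerting step, the snippet below audits a set of function configurations for missing, wildcard, or redundant permissions and emits categorized alerts. The config shape and permission names are hypothetical, not any particular cloud provider's API; real inventories would come from the provider's APIs or infrastructure-as-code files:

```python
# Hypothetical function configs for illustration only.
functions = [
    {"name": "resize-image", "permissions": ["s3:GetObject", "s3:PutObject"]},
    {"name": "cleanup-temp", "permissions": ["*"]},                    # wildcard
    {"name": "send-email",   "permissions": []},                       # missing
    {"name": "sync-orders",  "permissions": ["db:Read", "db:Read"]},   # redundant
]

def audit(functions):
    """Yield (severity, function, finding) tuples for review."""
    for fn in functions:
        perms = fn["permissions"]
        if not perms:
            yield ("high", fn["name"], "no permissions configured")
        if any("*" in p for p in perms):
            yield ("high", fn["name"], "wildcard permission")
        if len(perms) != len(set(perms)):
            yield ("low", fn["name"], "redundant permissions")

for severity, name, finding in audit(functions):
    print(f"[{severity}] {name}: {finding}")
```

In practice this audit would run continuously against the live environment, with anomalous-trigger detection layered on top, rather than as a one-off script.
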


How Containers Simplify DevOps Workflows and CI/CD Pipelines

DevOps has created a way to automate processes to build, test, and ship code faster and more reliably. Continuous integration/continuous delivery (CI/CD) isn’t a novel concept, but tools like Jenkins have done much to define what a CI/CD pipeline should look like. While DevOps represents a cultural change in the organization, CI/CD is the core engine that drives the success of DevOps. With CI, teams implement smaller changes more often and check code into version control repositories frequently. The result is far more consistency in the building, packaging, and testing of apps, leading to better collaboration and software quality. CD begins where CI ends: since teams work across several environments (prod, dev, test, etc.), the role of CD is to automate code deployment to these environments and execute service calls to databases and servers. The CI/CD concept isn’t entirely new, but it’s only now that we have the right tools to fully reap its benefits. Containers make it extremely easy to implement a CI/CD pipeline and enable a much more collaborative culture.
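
To make this concrete, here is a minimal sketch of a container-centric pipeline driver. It assumes Docker is installed and uses a hypothetical image name; in practice these stages would live in your CI tool's configuration (a Jenkinsfile, for instance) rather than a standalone script:

```python
import subprocess
import sys

IMAGE = "registry.example.com/myapp"  # hypothetical registry and image name

def run(cmd):
    """Run one pipeline stage; abort the whole pipeline on failure."""
    print(f"--> {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"stage failed: {' '.join(cmd)}")

def pipeline(tag: str):
    # CI: build the image, then run the test suite inside the container,
    # so every environment exercises the exact same artifact.
    run(["docker", "build", "-t", f"{IMAGE}:{tag}", "."])
    run(["docker", "run", "--rm", f"{IMAGE}:{tag}", "pytest"])
    # CD: publish the tested image; downstream environments (dev, test,
    # prod) then deploy the same immutable tag.
    run(["docker", "push", f"{IMAGE}:{tag}"])

if __name__ == "__main__":
    pipeline(tag=sys.argv[1] if len(sys.argv) > 1 else "latest")
```

The design point containers buy you is visible in the middle stage: the artifact that passes tests is byte-for-byte the artifact that ships.
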


Automation Is a Game Changer, Not a Job Killer

While many businesses embrace the positives of digitization, employees approach these changes with far less enthusiasm. Words like “automation” and “digitization” are loaded with baggage, evoking negative associations with job loss. Employees are quick to assume the worst, fearing they’ll be left behind or eliminated. But is that fear warranted? Not according to BDO’s recent survey of middle-market executives. The majority of companies are adding new digital enablement projects, with 34% planning to increase headcount and 42% comprehensively re-imagining job roles. Only 22% expect the use of automation to have a negative impact on headcount. In most cases, jobs are changing and evolving, requiring employees to work alongside new technologies, develop new skill sets, and integrate automation into their daily work lives. But for these digital initiatives to succeed, organizations need to secure employee buy-in; otherwise, initiatives will fall well short of maximum ROI. So, how can CIOs and IT leaders turn resistance into adoption and dispel unwarranted fears among the workforce?


Bugs in NVIDIA’s Jetson Chipset Opens Door to DoS Attacks, Data Theft

The most severe bug, tracked as CVE-2021-34372, opens the Jetson framework to a buffer-overflow attack. According to the NVIDIA security bulletin, the attacker would need network access to a system to carry out an attack, but the company warned that the vulnerability is not complex to exploit and that an adversary with low-level access rights could launch it. It added that an attack could give an adversary persistent access to components beyond the targeted NVIDIA chipset and allow a hacker to manipulate and/or sabotage a targeted system. “[The Jetson] driver contains a vulnerability in the NVIDIA OTE protocol message parsing code where an integer overflow in a malloc() size calculation leads to a buffer overflow on the heap, which might result in information disclosure, escalation of privileges and denial of service (DoS),” according to the security bulletin, posted on Friday. Oblivious transfer extensions (OTE) are low-level cryptographic algorithms used by Jetson chipsets in private-set-intersection protocols that secure data as the chip processes it.
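
The bug class the bulletin describes, an integer overflow feeding a malloc() size calculation, is easy to see with concrete numbers. The sketch below simulates 32-bit arithmetic in Python to show the wraparound; it illustrates the general pattern, not NVIDIA's actual code:

```python
# A 32-bit size calculation wraps around, so the allocation ends up
# far smaller than the amount of data later copied into it.

UINT32_MAX = 0xFFFFFFFF

def vulnerable_alloc_size(count: int, elem_size: int) -> int:
    """Mimics a C expression `malloc(count * elem_size)` where the
    multiplication happens in 32-bit arithmetic and silently wraps."""
    return (count * elem_size) & UINT32_MAX

def checked_alloc_size(count: int, elem_size: int) -> int:
    """The safe pattern: detect the overflow before allocating."""
    if count != 0 and elem_size > UINT32_MAX // count:
        raise OverflowError("size calculation would overflow")
    return count * elem_size

# An attacker-controlled count chosen so that count * 16 wraps to 16:
count = (UINT32_MAX // 16) + 2   # 268435457
print(vulnerable_alloc_size(count, 16))  # prints 16 -> tiny heap buffer

# Copying `count` 16-byte elements into those 16 bytes is the heap
# buffer overflow; checked_alloc_size(count, 16) raises instead.
```
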


How can technology design be made more inclusive?

With an increasing reliance on screens to communicate, organisations should also ensure that product design addresses how the software facilitates this, and make adjustments where necessary. “Brands must consider all forms of disabilities, such as vision and hearing impairments, as well as conditions like autism, at the very beginning of the design process,” said Paul Clark, senior vice-president and EMEA managing director at Poly. “At Poly, we’ve spent a lot of time making our solutions more accessible. For example, an employee at one of our customers is highly motivated to contribute but has Duchenne muscular dystrophy and was self-conscious about the loud, high-pitched noises his ventilator made during calls. Poly’s NoiseBlock AI technology has been built into all of our headsets and video bars to minimise non-human sounds; our personal video bar was able to tell that the ventilator noises were not speech and blocked them out. Simple solutions like raised volume buttons enable the user to recognise controls by touch instead of sight. Brands should also consider ease of use and comfort for people who wear headdresses, for example.”


Driving network transformation with unified communications

As with most digital processes, cybersecurity remains a primary concern for businesses. With the increased use of UC platforms such as Microsoft Teams, new security challenges are emerging, and quite often these vulnerabilities come from actions we do not think twice about. Video recordings, for example, often contain sensitive and confidential information that could prove detrimental if discovered outside the company; yet these recordings are typically stored on a server, or downloaded onto a desktop, without much consideration. In addition to threats against sensitive content and data, real-time collaboration can create security weaknesses. With the right tools, criminals could acquire the link needed to access private conferences and documents on a UC platform. Whether the aim is simply to eavesdrop or to cause disruption, such a breach could have a number of consequences, both short and long term. Again, these calls and documents may contain confidential details that could be exploited by criminals if leaked. Disruptions to conferences will not only cause frustration in the moment but could also damage an organization's reputation.



Quote for the day:

"Keep your fears to yourself, but share your courage with others." -- Robert Louis Stevenson