
Daily Tech Digest - July 02, 2024

The Changing Role of the Chief Data Officer

The chief data officer originally played more “defense” than “offense.” The position focused on data security, fraud protection, and Data Governance, and tended to attract people from a technical or legal background. CDOs now may take on a more offensive strategy, proactively finding ways to extract value from the data for the benefit of the wider business, and may come from an analytics or business background. Of course, in reality, the choice between offense and defense is a false one, as companies must do both. ... Major trends for CDOs in the future will include incorporating cutting-edge technology, such as generative AI, large language models, machine learning, and increasingly sophisticated forms of automation. The role is also spreading to a wider variety of industry sectors, such as healthcare, the private sector, and higher education. One of the major challenges is already in progress: responding to the COVID-19 pandemic. The pandemic hugely shook global supply chains, created new business markets, and also radically changed the nature of business itself. 


Duplicate Tech: A Bottom-Line Issue Worth Resolving

The patchwork nature of combined technologies can hinder processes and cause data fragmentation or loss. Moreover, differing cybersecurity capabilities among technologies can expose the organization to increased risk of cyberattacks, as older or less secure systems may be more vulnerable to breaches. Retaining multiple technologies may initially seem prudent in a merger or acquisition, but ultimately it proves detrimental. The drawbacks — from duplicated data and disconnected processes to inefficiencies and security vulnerabilities — far outweigh any perceived benefits, highlighting the critical need for streamlined, unified IT systems. ... There are compelling reasons to remove the dead weight of duplicate technologies and adopt a singular technology. The first step in eliminating tech redundancy is to evaluate existing technologies to determine which tools best align with current and future business needs. A collaborative approach with all relevant stakeholders is recommended to ensure the chosen solution supports organizational goals and avoids unnecessary repetition.


Disability community has long wrestled with 'helpful' technologies—lessons for everyone in dealing with AI

This disability community perspective can be invaluable in approaching new technologies that can assist both disabled and nondisabled people. You can't substitute pretending to be disabled for the experience of actually being disabled, but accessibility can benefit everyone. This is sometimes called the curb-cut effect after the ways that putting a ramp in a curb to help a wheelchair user access the sidewalk also benefits people with strollers, rolling suitcases and bicycles. ... Disability advocates have long battled this type of well-meaning but intrusive assistance—for example, by putting spikes on wheelchair handles to keep people from pushing a person in a wheelchair without being asked to or advocating for services that keep the disabled person in control. The disabled community instead offers a model of assistance as a collaborative effort. Applying this to AI can help to ensure that new AI tools support human autonomy rather than taking over. A key goal of my lab's work is to develop AI-powered assistive robotics that treat the user as an equal partner. We have shown that this model is not just valuable, but inevitable. 


What is the Role of Explainable AI (XAI) In Security?

XAI in cybersecurity is like a colleague who never stops working. While AI helps automatically detect and respond to rapidly evolving threats, XAI helps security professionals understand how these decisions are being made. “Explainable AI sheds light on the inner workings of AI models, making them transparent and trustworthy. Revealing the why behind the models’ predictions, XAI empowers the analysts to make informed decisions. It also enables fast adaptation by exposing insights that lead to quick fine-tuning or new strategies in the face of advanced threats. And most importantly, XAI facilitates collaboration between humans and AI, creating a context in which human intuition complements computational power,” Kolcsár added. ... With XAI working behind the scenes, security teams can quickly discover the root cause of a security alert and initiate a more targeted response, minimizing the overall damage caused by an attack and limiting resource wastage. As transparency allows security professionals to understand how AI models adapt to rapidly evolving threats, they can also ensure that security measures are consistently effective. 
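The idea of revealing the “why” behind an alert can be sketched in a few lines. This is a minimal illustration, not any vendor's product: a simple weighted scoring model where the feature names and weights are invented for the example, and the per-feature contributions are what an analyst would see alongside the final score.

```python
# Minimal sketch of explainable alert scoring: a linear model whose
# per-feature contributions show *why* an alert fired. Feature names
# and weights are illustrative only.

WEIGHTS = {
    "failed_logins": 0.6,
    "off_hours_access": 0.3,
    "new_geo_location": 0.9,
}

def score_alert(event):
    """Return the total risk score plus each feature's contribution."""
    contributions = {
        name: WEIGHTS[name] * event.get(name, 0) for name in WEIGHTS
    }
    return sum(contributions.values()), contributions

event = {"failed_logins": 5, "off_hours_access": 1, "new_geo_location": 1}
total, why = score_alert(event)
# The analyst sees not just the score but which signals drove it,
# e.g. that repeated failed logins dominated the total.
```

A real XAI pipeline would apply the same principle to far more complex models (e.g. via feature-attribution methods), but the output contract is the same: a prediction plus the evidence behind it.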


10 ways AI can make IT more productive

By infusing AI into business processes, enterprises can achieve levels of productivity, efficiency, consistency, and scale that were unimaginable a decade ago, says Jim Liddle, CIO at hybrid cloud storage provider Nasuni. He observes that mundane repetitive tasks, such as data entry and collection, can be easily handled 24/7 by intelligent AI algorithms. “Complex business decisions, such as fraud detection and price optimization, can now be made in real-time based on huge amounts of data,” Liddle states. “Workflows that spanned days or weeks can now be completed in hours or minutes.”  “Enterprises have long sought to drive efficiency and scale through automation, first with simple programmatic rules-based systems and later with more advanced algorithmic software,” Liddle says.  ... “By reducing boilerplating, teams can save time on repetitive tasks while automated and enhanced documentation keeps pace with code changes and project developments.” He notes that AI can also automatically create pull requests and integrate with project management software. Additionally, AI can generate suggestions to resolve bugs, propose new features, and improve code reviews.


How Tomorrow's Smart Cities Will Think For Themselves

When creating a cognitive city, the fundamental need is to move the computing power to where data is generated: where people live, work and travel. That applies whether you’re building a totally new smart city or retrofitting technology to a pre-existing ‘brownfield’ city. Either way, edge is key here. You’re dealing with information from sensors in rubbish bins, drains, and cameras in traffic lights. ... But in years to come the city itself will respond dynamically to the changing physical world, adjusting energy use in real-time to respond to the weather, for example. The evolution of monitoring has come from a machine-to-machine foundation, with the introduction of the Internet of Things (IoT) and now artificial intelligence (AI) becoming transformational in enabling smart technologies to become dynamic. Emerging AI technologies such as large language models will also play a role going forward, making it easy for both city planners and ordinary citizens to interact with the city they live in. Edge will be the key ingredient which gives us effective control of these cities of the future.


Serverless cloud technology fades away

The meaning of serverless computing became diluted over time. Originally coined to describe a model where developers could run code without provisioning or managing servers, it has since been applied to a wide range of services that do not fit its original definition. This led to a confusing loss of precision. It’s crucial to focus on the functional characteristics of serverless computing. The elements of serverless—agility, cost-efficiency, and the ability to rapidly deploy and scale applications—remain valuable. It’s important to concentrate on how these characteristics contribute to achieving business goals rather than becoming fixated on the specific technologies in use. Serverless technology will continue to fade into the background due to the rise of other cloud computing paradigms, such as edge computing and microclouds. ... The explosion of generative AI also contributed to the shifting landscape. Cloud providers are deeply invested in enabling AI-driven solutions, which often require specialized compute resources and significant data management capabilities, areas where traditional serverless models may not always excel.
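The original, narrow definition above reduces to a simple contract: the developer writes a stateless, per-event handler and the platform handles provisioning and scale-out. A minimal sketch, using the common Lambda-style handler signature (the event shape here is purely illustrative):

```python
# Function-as-a-service in miniature: a stateless handler invoked per
# event, with no server lifecycle for the developer to manage.

def handler(event, context=None):
    """Per-request entry point; all state arrives in the event."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

# The platform, not the developer, decides how many copies run;
# locally we can only simulate a burst of invocations:
responses = [handler({"name": n}) for n in ("alice", "bob")]
```

Everything the excerpt calls the "functional characteristics" of serverless lives outside this code: concurrency, scaling, and billing are the platform's problem, which is exactly the point.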


Infrastructure-as-code and its game-changing impact on rapid solutions development

Automation is one of the main benefits of adopting an IaC approach. By automating infrastructure provisioning, IaC allows configuration to be accomplished at a faster pace. Automation also reduces the risk of errors that can result from manual coding, empowering greater consistency by standardizing the development and deployment of the infrastructure. ... Developers can rapidly assemble and deploy their infrastructure building blocks, reusing them as needed throughout the development process. When adjustments are needed, developers can simply update the code the blocks are built on rather than making manual one-off changes to infrastructure components. Testing and tracking are more streamlined with IaC since the IaC code serves as a centralized and readily accessible source for documentation on the infrastructure. It also streamlines the testing process, allowing for automated unit testing of compliance, validation, and other processes before deploying. Additionally, IaC empowers developers to take advantage of the benefits provided by cloud computing. It facilitates direct interaction with the cloud’s exposed API, allowing developers to dynamically provision, manage, and orchestrate resources.
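The core IaC mechanic is declarative: infrastructure is described as data, and an idempotent "apply" step converges actual state to the desired state. A toy sketch of that loop (the resource names and in-memory "cloud" state are illustrative; real tools like Terraform or Pulumi do this against provider APIs):

```python
# Infrastructure as data: desired state is declared, and apply()
# computes and performs the create/update/delete diff idempotently.

desired = {
    "web-server": {"type": "vm", "size": "small"},
    "app-db": {"type": "database", "size": "medium"},
}

# Pre-existing "actual" state, as a real provider would report it.
actual = {"web-server": {"type": "vm", "size": "small"}}

def apply(desired, actual):
    """Converge actual to desired; return the list of changes made."""
    changes = []
    for name, spec in desired.items():
        if name not in actual:
            actual[name] = dict(spec)
            changes.append(("create", name))
        elif actual[name] != spec:
            actual[name] = dict(spec)
            changes.append(("update", name))
    for name in [n for n in actual if n not in desired]:
        del actual[name]
        changes.append(("delete", name))
    return changes

changes = apply(desired, actual)
# Running apply() again produces no changes: the hallmark of idempotence.
```

This is also why testing is easier, as the excerpt notes: the desired-state data structure can be validated for compliance before anything is ever provisioned.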


What is Multimodal AI? Here’s Everything You Need to Know

Multimodal AI describes artificial intelligence systems that can simultaneously process and interpret data from various sources such as text, images, audio, and video. Unlike traditional AI models that depend on a single type of data, multimodal AI provides a holistic approach to data processing. ... Although multimodal AI and generative AI share similarities, they differ fundamentally. For instance, generative AI focuses on creating new content from a single type of prompt, such as creating images from textual descriptions. In contrast, multimodal AI processes and understands different sensory inputs, allowing users to input various data types and receive multimodal outputs. ... Multimodal AI represents a significant advancement in the field of artificial intelligence. By understanding and leveraging this advanced technology, data scientists and AI professionals can pave the way for more sophisticated, context-aware, and human-like AI systems, ultimately enriching our interaction with technology and the world around us. 
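The mechanical core of multimodal processing is simple to sketch: each modality is encoded into a vector, and the vectors are fused into one joint representation that downstream models consume. The toy "encoders" below are stand-ins for real text and image models, and the fusion is the simplest kind (late fusion by concatenation):

```python
# Illustrative multimodal fusion: embed each modality separately,
# then concatenate into one joint vector.

def embed_text(text):
    # Stand-in text encoder: fixed-size vector from crude statistics.
    return [len(text) / 100.0, text.count(" ") / 10.0]

def embed_image(pixels):
    # Stand-in image encoder: mean brightness and pixel count.
    return [sum(pixels) / (255.0 * len(pixels)), len(pixels) / 1000.0]

def fuse(text, pixels):
    """Late fusion by concatenation: one vector carrying both inputs."""
    return embed_text(text) + embed_image(pixels)

joint = fuse("a cat on a mat", [10, 200, 30, 40])
# joint now carries signal from both modalities in a single vector,
# which a downstream classifier or generator can consume.
```

Production systems use learned encoders and more sophisticated fusion (cross-attention, for example), but the shape of the problem is the same: separate per-modality encoding, then a shared representation.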


Excel Enthusiast to Supply Chain Innovator – The Journey to Building One of the Largest Analytic Platforms

While ChatGPT has helped raise awareness about AI capabilities, explaining how to integrate AI has presented challenges, especially when managing over 200 different data analytic reports. To address the different uses, Miranda has simplified AI into three categories: rule-based AI, learning AI (machine learning), and generative AI. Generative AI has emerged as the most dynamic tool among the three for executing and recording data analytics. Its versatility and adaptability make it particularly effective in capturing and processing diverse data sets, contributing to more comprehensive analytics outcomes. Miranda says, “People in analytics might not jump out of bed excited to tackle documentation, but it's a critical aspect of our work. Without proper documentation, we risk becoming a single point of failure, which is something we want to avoid.” ... These recordings are then converted into transcripts and securely stored in a containerized environment, streamlining the documentation process while ensuring data security. Because of process automation, Miranda says that the organization saved the equivalent of 240,000 work hours last year, and they anticipate even more this year.



Quote for the day:

"Life is like riding a bicycle. To keep your balance you must keep moving." -- Albert Einstein

Daily Tech Digest - August 30, 2023

Generative AI Faces an Existential IP Reckoning of Its Own Making

Clearly, this situation is untenable, with a raft of dire consequences already beginning to emerge. Should the courts determine that generative AI firms aren’t protected by the fair use doctrine, the still-budding industry could be on the hook for practically limitless damages. Meanwhile, platforms like Reddit are beginning to aggressively push back against unchecked data scraping. ... These sorts of unintended externalities will only continue to multiply unless strong measures are taken to protect copyright holders. Government can play an important role here by introducing new legislation to bring IP laws into the 21st century, replacing outdated regulatory frameworks created decades before anyone could have predicted the rise of generative AI. Government can also spur the creation of a centralized licensing body to work with national and international rights organizations to ensure that artists, content creators, and publishers are being fairly compensated for the use of their content by generative AI companies.


6 hidden dangers of low code

The low-code sales pitch is that computers and automation make humans smarter by providing a computational lever that multiplies our intelligence. Perhaps. But you might also notice that, as people grow to trust in machines, we sometimes stop thinking for ourselves. If the algorithm says it’s the right thing to do, we'll just go along with it. There are endless examples of the disaster that can ensue from such thoughtlessness. ... When humans write code, we naturally do the least amount of work required, which is surprisingly efficient. We're not cutting corners; we're just not implementing unnecessary features. Low code solutions don’t have that advantage. They are designed to be one-size-fits-all, which in computer code means libraries filled with endless if-then-else statements testing for every contingency in the network. Low code is naturally less efficient because it’s always testing and retesting itself. This ability to adjust automatically is the magic that the sales team is selling, after all. But it’s also going to be that much less efficient than hand-tuned code written by someone who knows the business.


Applying Reliability Engineering to the Manufacturing IT Environment

To understand exposure to failure, the Reliability Engineers analyzed common failure modes across manufacturing operations, utilizing the Failure Mode and Effects Analysis (FMEA) methodology to anticipate potential issues and failures. Examples of common failure modes include “database purger/archiving failures leading to performance impact” and “inadequate margin to tolerate typical hardware outages.” The Reliability Engineers also identified systems that were most likely to cause factory impact due to risk from these shared failure modes. This data helped inform a Resiliency Maturity Model (RMM), which scores each common failure mode on a scale from 1 to 5 based on a system’s resilience to that failure mode. This structured approach enabled us to not just fix isolated examples of applications that were causing the most problems, but to instead broaden our impact and develop a reliability mindset. 
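The scoring scheme described above can be sketched directly: each system gets a 1-5 resilience score per shared failure mode, and summing (or otherwise aggregating) those scores ranks systems by exposure. The failure modes below come from the article's examples; the system names and scores are invented for illustration:

```python
# Sketch of the Resiliency Maturity Model (RMM) idea: score each
# system 1-5 against shared failure modes, then rank by total
# exposure. Lower scores mean less resilience, i.e. more risk.

FAILURE_MODES = ["db_purger_failure", "hw_outage_margin"]

systems = {
    "mes":      {"db_purger_failure": 2, "hw_outage_margin": 1},
    "scada-gw": {"db_purger_failure": 4, "hw_outage_margin": 3},
}

def rank_by_risk(systems):
    """Most at-risk system (lowest total resilience) first."""
    return sorted(systems, key=lambda s: sum(systems[s].values()))

priority = rank_by_risk(systems)
```

The payoff is what the excerpt describes: instead of fixing whichever application complained loudest, the team works down a ranked list driven by shared failure modes.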


5 Skills All Marketing Analytics and Data Science Pros Need Today

Marketing analysts should hone their skills to know who to talk to – and how to talk to them – to secure the information they have. Trust Insights’ Katie Robbert says it requires listening and asking questions to understand what they know that you need to take back to your team, audience, and stakeholders. “You can teach anyone technical skills. People can follow the standard operating procedure,” she says. “The skill set that is so hard to teach is communication and listening.” ... By improving your communication skills, you’ll be well-positioned to follow Hou’s advice: “Weave a clear story in terms of how marketing data could and should guide the organization’s marketing team.” She says you should tell a narrative that connects the dots, explains the how and where of a return on investment, and details actions possible not yet realized due to limited lines of sight. ... Securing organization-wide support requires leaning into what the data can do for the business. “Businesspeople want to see the business outcomes. 


Neural Networks vs. Deep Learning

Neural networks, while powerful in synthesizing AI algorithms, typically require fewer resources. In contrast, because deep learning platforms must be trained on complex data sets before they can analyze them and deliver rapid results, they typically take far longer to develop, set up, and reach the point where they yield accurate results. ... Neural networks are trained on data as a way of learning and improving their conclusions over time. As with all AI deployments, the more data it’s trained on the better. Neural networks must be fine-tuned for accuracy over and over as part of the learning process to transform them into powerful artificial intelligence tools. Fortunately for many businesses, plenty of neural networks have been trained for years – far before the current craze inspired by ChatGPT – and are now powerful business tools. ... Deep learning systems make use of complex machine learning techniques and can be considered a subset of machine learning. But in keeping with the multi-layered architecture of deep learning, these machine learning instances can be of various types and various strategies throughout a single deep learning application.
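The distinction the excerpt draws is easiest to see structurally: a "shallow" network is a single weighted layer, and deep learning stacks many such layers. A toy forward pass (the weights here are arbitrary and untrained, purely to show the architecture):

```python
# Shallow vs. deep, structurally: one weighted layer versus a stack
# of layers. Weights are arbitrary and untrained.

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, bias):
    """One neuron: weighted sum plus bias, through a ReLU."""
    return relu(sum(w * x for w, x in zip(weights, inputs)) + bias)

def shallow(inputs):
    # A single layer: the classic "neural network" building block.
    return layer(inputs, [0.5, -0.2], 0.1)

def deep(inputs):
    # Stacking layers is what makes a network "deep"; each layer's
    # output becomes the next layer's input.
    h1 = layer(inputs, [0.5, -0.2], 0.1)
    h2 = layer([h1], [1.5], -0.05)
    return layer([h2], [2.0], 0.0)

out = deep([1.0, 0.5])
```

The extra layers are also why deep learning costs more to train, as the excerpt notes: every added layer multiplies the parameters that must be fitted to data.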


Ready or not, IoT is transforming your world

At its core, IoT refers to the interconnection of everyday objects, devices, and systems through the internet, enabling them to collect, exchange, and analyze data. This connectivity empowers us to monitor and control various aspects of our lives remotely, from smart homes and wearable devices to industrial machinery and city infrastructure. The essence of IoT lies in the seamless communication between objects, humans, and applications, making our environments smarter, more efficient, and ultimately, more convenient. ... Looking ahead, the future of IoT holds remarkable potential. Over the next five years, we can expect a multitude of advancements that will reshape industries and lifestyles. Smart cities will continue to evolve, leveraging IoT to enhance sustainability, security, and quality of life. The healthcare sector will witness even more personalized and remote patient monitoring, revolutionizing the way medical care is delivered. AI and automation will play a pivotal role in driving efficiency and innovation across various domains.


What are network assurance tools and why are they important?

Without a network assurance tool at their disposal, many enterprises would be forced to limit their network reach and capacity. "They would be unable to take advantage of the latest technological advancements and innovations because they didn’t have the manpower or tools to manage them," says Christian Gilby, senior product director, AI-driven enterprise, at Juniper Networks. "At the same time, enterprises would be left behind by their competitors because they would still be utilizing manual, trial-and-error procedures to uncover and repair service issues." The popularity of network assurance technology is also being driven by a growing enterprise demand for network teams to do more with less. "Efficiency is needed in order to manage the ever-expanding network landscape," adds Gilby. New devices and equipment are constantly brought online and added to networks. Yet enterprises don’t have unlimited IT budgets, meaning that staffing levels often remain the same, even as workloads increase.


How tomorrow’s ‘smart cities’ will think for themselves

In the smart cities of the future, technology will be built to respond to human needs. Sustainability is the biggest problem facing cities – and by far the biggest contributor is the automobile. Smart cities will enable the move towards reducing traffic, and towards autonomous vehicles directed efficiently through the streets. Deliveries that fail on the first attempt are one example. These are a key driver of congestion, as drivers have to return to the same address repeatedly. In a cognitive city, location data that shows when a customer is home can be shared anonymously with delivery companies – with their consent – so that more deliveries arrive on the first attempt. Smart parking will be another important way to reduce congestion and make the streets more efficient. Edge computing nodes will sense empty parking spaces and direct cars there in real-time. They will also be a key enabler for autonomous driving, delivering more data points to autonomous systems in cars. 
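The smart-parking mechanism described above is straightforward to sketch: edge nodes report space occupancy, and an approaching car is directed to the nearest space currently reported free. The node IDs and distances below are invented for the example:

```python
# Toy smart-parking router: pick the nearest space that an edge
# node currently reports as free.

spaces = [
    {"id": "p1", "free": False, "distance_m": 50},
    {"id": "p2", "free": True,  "distance_m": 120},
    {"id": "p3", "free": True,  "distance_m": 80},
]

def nearest_free_space(spaces):
    """Return the id of the closest free space, or None if all taken."""
    free = [s for s in spaces if s["free"]]
    return min(free, key=lambda s: s["distance_m"])["id"] if free else None

target = nearest_free_space(spaces)  # directs the car to "p3"
```

In practice this decision runs at the edge precisely because it is latency-sensitive and local: a city-wide cloud round trip adds nothing when the relevant sensors are on the same street.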


Navigating Your Path to a Career in Cyber Security: Practical Steps and Insights

Practical experience is critical in the field of cyber security. Seek opportunities to apply your knowledge and gain hands-on experience as often as you can. I recommend looking for internships, part-time jobs, or volunteer positions that allow you to work on real-world projects and develop practical skills. I cannot stress how important it is to understand the fundamentals. ... Networking is essential for finding job opportunities in any field, including cybersecurity. You should attend industry events and conferences (there are plenty of free ones) and try to meet as many professionals already working in the field as possible. Their insights will go a long way in your journey to finding the right role. There are also many online communities and forums you can join where cyber security experts gather to discuss trends, share knowledge, and explore job opportunities. Networking will help you gain insights, discover job openings, and even receive recommendations from industry professionals.


NCSC warns over possible AI prompt injection attacks

Complex as this may seem, some early developers of LLM-products have already seen attempted prompt injection attacks against their applications, albeit generally these have been either rather silly or basically harmless. Research is continuing into prompt injection attacks, said the NCSC, but there are now concerns that the problem may be something that is simply inherent to LLMs. This said, some researchers are working on potential mitigations, and there are some things that can be done to make prompt injection a tougher proposition. Probably one of the most important steps developers can take is to ensure they are architecting the system and its data flows so that they are happy with the worst-case scenario of what the LLM-powered app is allowed to do. “The emergence of LLMs is undoubtedly a very exciting time in technology. This new idea has landed – almost completely unexpectedly – and a lot of people and organisations (including the NCSC) want to explore and benefit from it,” wrote the NCSC team.
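The NCSC's "architect for the worst case" advice has a concrete shape: treat model output as untrusted input and constrain what the application will actually execute, so an injected prompt cannot widen its privileges. A minimal sketch (the action names and the crude parsing are illustrative, not from NCSC guidance):

```python
# Worst-case-first design against prompt injection: the app only ever
# executes actions from a small allowlist, regardless of what the
# model emits.

ALLOWED_ACTIONS = {"summarize", "translate"}

def execute_llm_action(llm_output):
    """Treat LLM output as untrusted; refuse anything off the list."""
    action = llm_output.strip().split()[0].lower()
    if action not in ALLOWED_ACTIONS:
        return ("refused", action)
    return ("ok", action)

# Even if an injected prompt convinces the model to emit a dangerous
# command, the surrounding application never runs it:
result = execute_llm_action("delete_all_records now")  # -> refused
```

This does not prevent injection itself (which, as the article notes, may be inherent to LLMs); it limits the blast radius when injection succeeds.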



Quote for the day:

"When you practice leadership, the evidence of quality of your leadership, is known from the type of leaders that emerge out of your leadership" -- Sujit Lalwani

Daily Tech Digest - December 12, 2022

14 lessons CISOs learned in 2022

Ransomware attacks have increased in 2022, with companies and government entities among the most prominent targets. Nvidia, Toyota, SpiceJet, Optus, Medibank, the city of Palermo, Italy, and government agencies in Costa Rica, Argentina, and the Dominican Republic were among the victims in 2022, a year in which the lines between financially and politically motivated ransomware groups continued to be blurred. A critical piece of any organization's defense strategy should be employee awareness and training because "employees continue to be targeted in threat actor strategies through phishing and other social engineering means," says Gary Brickhouse, CISO at GuidePoint Security. ... Organizations should also do more to keep up with vulnerabilities in both open- and closed-source software. However, this is no easy task since thousands of bugs surface yearly. Vulnerability management tools can help identify and prioritize vulnerabilities found in operating systems and applications.


Grow your own CIO: Building leadership and succession plans

To ensure the long-term health of the company, tech chiefs must focus on building up that middle tier of IT leaders, a reality many CIOs are only now recognizing the need to address. “There are not enough people out there — you have to develop your own people,’’ says Roberts, who estimates that only 10% to 20% of companies are “being intentional about doing formal development programs.’’ Mike Eichenwald, a senior client partner at Korn Ferry Consulting, agrees that it’s important to elevate individuals from vertical leadership roles within the pillars of infrastructure, engineering, product, and security to enterprise leadership roles. With technology converging in all aspects of the business, doing so will help organizations leverage the diversity of experience those midlevel managers have under their belts, and their learning curve and degree of risk will be minimized, Eichenwald says. “Unfortunately, organizations miss an opportunity to cultivate that talent internally and often find themselves needing to reach out to the [external] market to bring it in,’’ he adds.


Open source security fought back in 2022

Anyone paying attention to open source for the past 20 years—or even the past two—will not be surprised to see commercial interests start to flourish around these popular open source technologies. As has become standard, that commercial success is usually spelled c-l-o-u-d. Here's one prominent example: On December 8, 2022, Chainguard, the company whose founders cocreated Sigstore while at Google, released Chainguard Enforce Signing, which enables customers to use Sigstore-as-a-service to generate digital signatures for software artifacts inside their own organization using their individual identities and one-time-use keys. This new capability helps organizations ensure the integrity of container images, code commits, and other artifacts with private signatures that can be validated at any point an artifact needs to be verified. It also allows a dividing line where open source software artifacts are signed in the open in a public transparency log; however, enterprises can sign their own software with the same flow, but with private versions that aren’t in the public log. 
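The sign-then-verify flow behind artifact signing is worth seeing in miniature. Real Sigstore uses keyless signing with short-lived certificates and a public transparency log; the stand-in below substitutes a plain HMAC key purely to show the shape of the flow (hash the artifact, sign the digest, verify before use):

```python
# Simplified artifact signing sketch. NOT how Sigstore works
# internally -- an HMAC key stands in for its certificate-based
# signatures, just to illustrate sign -> store -> verify.

import hashlib
import hmac

PRIVATE_KEY = b"demo-key"  # illustrative; never hard-code real keys

def sign_artifact(artifact_bytes):
    """Hash the artifact and sign the digest."""
    digest = hashlib.sha256(artifact_bytes).digest()
    return hmac.new(PRIVATE_KEY, digest, hashlib.sha256).hexdigest()

def verify_artifact(artifact_bytes, signature):
    """Recompute and compare in constant time before trusting it."""
    return hmac.compare_digest(sign_artifact(artifact_bytes), signature)

image = b"container-image-contents"
sig = sign_artifact(image)
ok = verify_artifact(image, sig)            # untampered: passes
tampered = verify_artifact(b"evil", sig)    # modified: fails
```

The public/private split the article describes maps onto where the signature record is published: the same flow, but with the log entry kept inside the enterprise rather than in the public transparency log.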


Turning the vision of a utopic smart city into reality

It’s critical to consider what success looks like, and this can be measured by how user-friendly and efficient a service is, as well as cost efficiencies. For instance, reducing the time to find a parking space in a new city from an hour to just a few minutes when using parking apps which can indicate spaces and process payment. It’s almost impossible to consider smart cities without thinking about the efficient energy management benefits of smart buildings. Sustainable initiatives such as integrated workplace management systems already have the capability to monitor over 50,000 data points per second, analyse data, and send it to mobile apps. This could see millions of users saving energy. With a long-term vision for smart city platforms to become unified or standardised, one solution can potentially work seamlessly anywhere in the world. Platforms could integrate city infrastructure and navigation, and access to emergency and city services. Transformation will be driven by users empowered with the right data, perhaps even according to their user type of student, tourist, or city resident.


Can real-time data visualisation deliver trust and opportunity?

What is interesting is that so much of this is driven through an ecosystem of partners. No one organisation can deliver the breadth and depth of data and tools needed to make such projects work and there is much to learn from that. Collaborations and partnerships can elevate and enhance real-time data visualisation and value. For many organisations however, real-time data is still virgin territory and real-time visualisation is one of those technologies where reality cannot hope to match expectation, at least according to Jaco Vermeulen, CTO of tech consultancy BML Digital. “Almost every customer says they want real-time visualisation, but then nine out of 10 can’t qualify why they need it, especially when it comes to what decisions or actions it will enable,” says Vermeulen. “This is usually because they start from the belief that the data is always available and therefore should be immediately understandable and yield profound insight. The truth is a bit more challenging.” ... “It is the real-time decisions that create impact,” he says. “Optimising supply chains, reducing waste and pollution, optimising operations, and informing and satisfying consumers. 


IBM’s Krishnan Talks Finding the Right Balance for AI Governance

The challenge comes essentially from not knowing how the sausage was made. One client, for instance, had built 700 models but had no idea how they were constructed or what stages the models were in, Krishnan said. “They had no automated way to even see what was going on.” The models had been built with each engineer’s tool of choice with no way to know further details. As a result, the client could not make decisions fast enough, Krishnan said, or move the models into production. She said it is important to think about explainability and transparency for the entire life cycle rather than fall into the tendency to focus on models already in production. Krishnan suggested that organizations should ask whether the right data is being used even before something gets built. They should also ask if they have the right kind of model and if there is bias in the models. Further, she said automation needs to scale as more data and models come in. The second trend Krishnan cited was the increased responsible use of AI to manage risk and reputation to instill and maintain confidence in the organization. 


13 tech predictions for 2023

“Different edges are implemented for different purposes. Edge servers and gateways may aggregate multiple servers and devices in a distributed location, such as a manufacturing plant. An end-user premises edge might look more like a traditional remote/branch office (ROBO) configuration, often consisting of a rack of blade servers. Telecommunications providers have their own architectures that break down into a provider far edge, a provider access edge, and a provider aggregation edge. ... As we enter 2023, CIOs have earned a seat among the decision-makers and are now at the helm of company-wide technology decision-making. Amid a volatile economic climate, IT leaders must prioritize reducing costs, but they are finding themselves pulled between contrasting concerns of managing spend, dealing with security risks, and fostering innovation. As they navigate an uncertain market, CIOs will need to analyze company usage, along with their previous experience, to rethink business approaches and make decisions. The goal is to identify ways to reduce spend across the company, but not at the expense of key areas like cybersecurity and innovation. 


Preventing a ransomware attack with intelligence: Strategies for CISOs

One of the most effective ways to stop a ransomware attack is to deny them access in the first place; without access, there is no attack. The adversary only needs one route of access, and yet the defender has to be aware and prevent all entry points into a network. Various types of intelligence can illuminate risk across the pre-attack chain—and help organizations monitor and defend their attack surfaces before they’re targeted by attackers. The best vulnerability intelligence should be robust and actionable. For instance, with vulnerability intelligence that includes exploit availability, attack type, impact, disclosure patterns, and other characteristics, vulnerability management teams predict the likelihood that a vulnerability could be used in a ransomware attack. With this information in hand, vulnerability management teams, who are often under-resourced, can prioritize patching and preemptively defend against vulnerabilities that could lead to a ransomware attack. Having a deep and active understanding of the illicit online communities where ransomware groups operate can also help inform methodology, and prevent compromise.
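The prioritization described above reduces to a scoring problem: combine exploit availability, attack type, and impact into a ransomware-likelihood score and patch the highest-scoring vulnerabilities first. The weights and CVE entries below are invented for illustration, not real intelligence data:

```python
# Sketch of intelligence-driven patch prioritization: weight the
# signals that the article says precede ransomware use, then rank.

vulns = [
    {"id": "CVE-A", "exploit_public": True,  "remote": True,  "impact": 9},
    {"id": "CVE-B", "exploit_public": False, "remote": True,  "impact": 7},
    {"id": "CVE-C", "exploit_public": True,  "remote": False, "impact": 5},
]

def ransomware_score(v):
    """Public exploit and remote exploitability dominate; impact
    breaks ties. Weights are illustrative."""
    return 4 * v["exploit_public"] + 3 * v["remote"] + v["impact"] / 10

def patch_order(vulns):
    return [v["id"] for v in sorted(vulns, key=ransomware_score, reverse=True)]

order = patch_order(vulns)  # most likely ransomware vector first
```

The design point for an under-resourced team is the ranking itself: a remotely exploitable bug with a public exploit outranks a higher-impact bug nobody can yet reach.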


What to do when your devops team is downsized

If you lead teams or manage people, your first thought must be how they feel or how they are personally impacted by the layoffs. Some will be angry if they’ve seen friends and confidants let go; others may be fearful they’re next. Even when leadership does a reasonable job at communication (which is all too often not the case), chances are your teams and colleagues will have unanswered questions. Your first task after layoffs are announced is to open a dialogue, ask people how they feel, and dial up your active listening skills. Other steps to help teammates feel safe include building empathy for personal situations, energizing everyone around a mission, and thanking team members for the smallest wins. Use your listening skills to identify the people who have greater concerns and fears or who may be flight risks. You’ll want to talk to them individually and find ways to help them through their anxieties or recognize when they need professional help. You should also give people and teams time to reflect and adjust. Asking everyone to get back to their sprint commitments and IT tickets is insensitive and unrealistic, especially if the company laid off many people.


Our ChatGPT Interview Shows AI Future in Banking Is Scary-Good

ChatGPT is a large, advanced language processing model that is trained using a technique called generative pre-trained transformer, or GPT. This allows ChatGPT to generate human-like responses to questions and statements in a conversation, making it a powerful tool for a wide range of applications. Compared to traditional chatbots, which are often limited in their ability to understand and generate natural language, ChatGPT has the advantage of being able to provide more accurate and detailed responses. Additionally, because it is trained using a large amount of data, ChatGPT is able to learn and adapt to different conversational styles and contexts, making it more versatile and capable of handling a wider range of scenarios. ... The banking industry can use ChatGPT technology in a number of ways to improve their operations and provide better service to their customers. For example, ChatGPT can be used to automate customer service tasks, such as answering frequently asked questions or providing detailed information about products and services. This can free up customer service representatives to focus on more complex or high-value tasks, improving overall efficiency and customer satisfaction.



Quote for the day:

"Strong leaders encourage you to do things for your own benefit, not just theirs." -- Tim Tebow

Daily Tech Digest - October 14, 2022

Which cybersecurity metrics matter most to CISOs today?

Given the rapid increase in malware-free attacks, there’s a tendency on the part of cybersecurity teams to add more metrics. Seeing more reported data as a panacea for rising risks that aren’t immediately understood, cybersecurity teams will turn on as many metrics as possible, looking for clues. Relying on antivirus, SIEM (security information and event management), security ticketing systems, vulnerability scanners, and more, CISOs’ teams generate an overwhelming number of metrics that lack context. CISOs warn that presenting metrics straight from tools without a narrative supporting them is a mistake. C-level executives and the boards they report to are more focused on new insights that are contextually relevant than on a series of tactical measures. Every new high-profile intrusion or breach can drive a dozen or more internal user requests for new metrics. Managing user requests by how much value they provide to contextual intelligence and delivering business value is critical. CISOs tell VentureBeat it’s easy to say no to additional metrics requests when the requested metrics bear no connection to quantifying the value cybersecurity delivers.


Making everything connect for smart cities

It’s a vision of how smart cities can be holistically planned by connecting the different city domains and addressing Sustainable Development Goals (SDGs) globally. In this way, mobility, energy, the environment, health, education, security and the economy are not treated separately, but rather as a consistent, holistic continuum of human-centric services. Smart cities need to be much better at creating an open platform of dialogue that is accessible to all citizens. ... These allow residents to engage with a wide array of data, as well as complete personal tasks like paying bills, finding efficient transportation and assessing energy consumption in the home. Smart cities also need to account for social infrastructure that provides a cultural fabric, making the city attractive to residents and offering a sense of local identity. It is often the social and cultural aspects of a city that citizens find make it most attractive to live in – aspects such as green open spaces, a wide choice of retail outlets, and bustling nightlife. This is particularly important for cities that are being created ‘from scratch’ (rather than already existing) and need to find effective ways to attract residents.


Dell gets more edge-specific with Project Frontier platform

Dell also said it is expanding its current edge portfolio in the following ways: Edge analytics and operations - Manufacturers can optimize how they deploy edge applications with a Dell Validated Design for Manufacturing Edge, the company said. This now includes new Dell-validated partner applications to support advanced edge use cases, and improve factory processes and efficiencies, while reducing waste and raw materials usage for more sustainable operations. Manufacturers can respond quickly to changes in demand, and enable reconfigurable production lines with Dell's private 5G capability, Dell said. Edge computing and analytics - The PowerEdge XR4000 is the smallest server in the Dell lineup at about the size of a shoebox. The XR4000 is 60% shorter than conventional data center servers, and its multiple mounting options allow it to be installed in a rack, on walls or ceilings, saving valuable floor space. The multi-node, 2U chassis server can survive unpredictable conditions, such as heat waves or falls, the company said.


The White House can build on its AI Bill of Rights blueprint today

Several current uses of AI clearly violate the blueprint and should no longer be used. The president should also stop encouraging agencies to spend American Rescue Plan funds on ShotSpotter and other “gunshot detection” technologies, which change police behavior but have not been shown to decrease gun violence. These tools are in violation of the blueprint’s principles that AI tools must be safe, effective, nondiscriminatory, and transparent. ... On the legislative front, the AI Bill of Rights principles are embodied in both the American Data Privacy Protection Act and the Algorithmic Accountability Act of 2022, both of which the administration could put its support behind. There has been substantial investment in the development and adoption of AI, but nowhere near as much money or energy put toward safeguards or protection. We should not repeat the same self-regulatory mistakes made with social media and online advertising that left us in the privacy crisis we are in today. 


How intelligent automation changes CI/CD

Intelligent automation addresses many of the core requirements for successful software delivery. Basic process automation can increase devops productivity by automating routine manual tasks through code. For example, a developer can run a build in Jenkins that then triggers an automated task that pushes the build to Artifactory and kicks off a delivery pipeline. However, combining automation with AI-powered intelligence can turbocharge processes and improve business outcomes. Intelligent automation can automate routine tasks and then constantly improve automated decision making as the release moves through the delivery lifecycle. Intelligence applied to the release process — when combined with deep tools integrations that provide access not only to events but also to all process data — can automate the detection of software risks and automatically flag release candidates for remediation before they make it to production. In addition to increased devops productivity and faster and more accurate software releases, intelligent automation provides the means to implement centralized, automated control over compliance and security. 
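The automated risk detection described here can be sketched as a simple gate that inspects release metadata and flags candidates for remediation. The rule set, thresholds, and field names below are illustrative assumptions, not any specific product's checks:

```python
# Illustrative sketch of automated release-risk flagging in a delivery
# pipeline. Fields and thresholds are hypothetical examples.

def flag_release(candidate):
    """Return a list of risks that should block promotion to production."""
    risks = []
    if candidate["failed_tests"] > 0:
        risks.append("failing tests")
    if candidate["open_critical_vulns"] > 0:
        risks.append("critical vulnerabilities")
    if candidate["change_size_loc"] > 5000:   # large changes carry more risk
        risks.append("unusually large change")
    return risks

candidate = {"failed_tests": 0, "open_critical_vulns": 2, "change_size_loc": 800}
print(flag_release(candidate))  # non-empty list -> hold the release for remediation
```

An intelligent-automation platform would go further by learning these thresholds from historical release data, but even static rules like this move the go/no-go decision out of manual review.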


A Big Threat for SMBs: Why Cybersecurity is Everyone’s Responsibility

Cybersecurity impacts everyone across every department and every element of operations; it is a collective responsibility. During this Cybersecurity Awareness Month, let’s debunk the pervasive misconception that cybersecurity is strictly an IT issue. To avoid becoming a statistic, SMBs need to develop a security culture that reinforces the idea that cybersecurity is the responsibility of every team member. From the founder who sets a security-focused tone to the specific teams that implement the policies, to the HR department responsible for onboarding new employees, to the IT team setting system password requirements, and to every employee that can potentially open a phishing email triggering a security incident, it’s a collective effort to stay aware. All individuals need to be trained, vigilant, and engaged. The devil is in the details, as it’s the tools, tasks, and routine activities each team member performs that will protect the company.


Seeing electron movement at fastest speed ever could help unlock next-level quantum computing

Seeing electrons move in increments of one quintillionth of a second could help push processing speeds up to a billion times faster than what is currently possible. In addition, the research offers a “game-changing” tool for the study of many-body physics. “Your current computer’s processor operates in gigahertz, that’s one billionth of a second per operation,” said Mackillo Kira, U-M professor of electrical engineering and computer science, who led the theoretical aspects of the study published in Nature. “In quantum computing, that’s extremely slow because electrons within a computer chip collide trillions of times a second and each collision terminates the quantum computing cycle. ... To see electron movement within two-dimensional quantum materials, researchers typically use short bursts of focused extreme ultraviolet (XUV) light. Those bursts can reveal the activity of electrons attached to an atom’s nucleus. But the large amounts of energy carried in those bursts prevent clear observation of the electrons that travel through semiconductors—as in current computers and in materials under exploration for quantum computers.


New data protection bill must enable a progressive data governance framework

A robust framework safeguarding the privacy of an individual’s data would have made the privacy design of the bill even stronger. The consent and notice framework in the new Bill should be dealt with in such a way that it addresses the right to informational privacy while avoiding consent fatigue for consumers. For instance, individuals may receive innumerable privacy notifications causing consent fatigue; this issue was considered and acknowledged by the Justice Srikrishna committee report. Besides, from a business perspective, the cost of compliance, especially for small businesses, will be significant. The new personal data governance framework should focus on simplifying the consent and notice framework in such a manner that individuals can easily understand how and for what purpose their personal data is being processed. Besides, the new Bill must lay out better means and ways to obtain consent that are inclusive, less tiresome, and efficient.


Emotional intelligence: How to create psychological safety for your IT team

The best leaders understand the complexities and imperfections of being human and are not afraid to present their true selves in the workplace. These leaders emanate compassion and encourage their team members to embrace and express their unique gifts and talents. Compassion cuts through mental constructs and perceptions. It begins when leaders examine and undo traditional rules, roles, and narratives that limit their thinking, decision-making, and worldview. Freedom from outdated narratives enables release, self-acceptance, and permission to bring one’s whole self to the workplace. Leaders who are driven by the needs of the ego struggle to let go of outdated competence, values, and skills. Marshall Goldsmith, one of the world’s foremost thought leaders on executive coaching, explains this perfectly in the title of his book, What Got You Here Won’t Get You There. The compulsive need to be right becomes more important than discovering new horizons, untapped potential, and possibilities. Self-righteousness creates a division between the self and the team, eroding trust.


Smart buildings may be your cybersecurity downfall

With the rise of IoT, a wave of adoption of IT and IoT solutions at all levels of building system architecture poses a serious cyber security issue. As it becomes increasingly difficult to distinguish between building automation systems and other systems used in companies and their infrastructures, more “cyber holes” tend to be left unmonitored. The use of insecure industrial protocols is another vulnerability that attackers take advantage of to disrupt smart building operations. This is especially the case for building automation systems. Popular protocols like BACnet and LonWorks are not implicitly secure and, like those used in the industrial production sector, tend to have their own vulnerabilities. ... As the cyber-physical equipment within buildings becomes increasingly distributed, especially due to the new trend of supervising building complexes from a central location, cyberattacks on smart buildings, as well as attacks on cities and other smart city infrastructure, can have a significant security impact on users.



Quote for the day:

"Personal leadership is the process of keeping your vision and values before you and aligning your life to be congruent with them." -- Stephen R. Covey

Daily Tech Digest - September 18, 2022

5 ways to secure devops

Devops workflows are designed for speed and rapidly iterating with the latest requirements and performance improvements. Gate reviews are static. The tools devops teams rely on for security testing can lead to roadblocks, given their gate-driven design. Devops is a continuous process in high-performance IT teams, while stage gates slow the pace of development. Devops leaders often don’t have the time to train their developers to integrate security from the initial phases of a project. The challenge is how few developers are trained on secure coding techniques. Forrester’s latest report on improving code security from devops teams looked at the top 50 undergraduate computer science programs in the US, as ranked by US News and World Report for 2022, and found that none require secure coding or a secure application design class. CIOs and their teams are stretched thin with the many digital transformation initiatives, support for virtual teams and ongoing infrastructure support projects they have going on concurrently. CIOs and CISOs also face the challenges of keeping their organizations in regulatory compliance with more complex audit and reporting requirements. 


Designing APIs for humans: Error messages

The status code of the response should already tell you if an error happened or not, the message needs to elaborate so you can actually fix the problem. It might be tempting to have deliberately obtuse messages as a way of obscuring any details of your inner systems from the end user; however, remember who your audience is. APIs are for developers and they will want to know exactly what went wrong. It’s up to these developers to display an error message, if any, to the end user. Getting an “An error occurred” message can be acceptable if you’re the end user yourself since you’re not the one expected to debug the problem (although it’s still frustrating). As a developer there’s nothing more frustrating than something breaking and the API not having the common decency to tell you what broke. ... Letting you know what the error was is the bare minimum, but what a developer really wants to know is how to fix it. A “helpful” API wants to work with the developer by removing any barriers or obstacles to solving the problem. The message “Customer not found” gives us some clues as to what went wrong, but as API designers we know that we could be giving so much more information here.
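To make the contrast concrete, here is a sketch of a bare error payload next to a more helpful one. The JSON shape, field names, and documentation URL are hypothetical examples, not any real API's schema:

```python
# Contrasting a bare API error with a developer-friendly one.
# All field names and the doc URL are hypothetical illustrations.
import json

# A bare message forces the developer to guess what went wrong.
bare = {"error": "Customer not found"}

# A more helpful payload names the failing parameter, gives the offending
# value, and points to documentation on how to fix the request.
helpful = {
    "error": {
        "type": "invalid_request_error",
        "message": "No customer found with id 'cus_123'.",
        "param": "customer_id",
        "doc_url": "https://example.com/docs/errors#customer-not-found",
    }
}

print(json.dumps(helpful, indent=2))
```

Both responses would ship with the same 404 status code; the second one simply respects that its audience is a developer trying to debug a failed call.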


Arm Neoverse roadmap targets enterprise infrastructure, cloud

"Compute workloads are on a relentless march higher, and becoming more complex," said Chris Bergey, senior vice president and general manager of Arm's infrastructure line of business, at a press briefing. "Machine learning and AI are taking over the future, and so infrastructure will look nothing like the past." Over the next year, Arm will work closely with its cloud and software partners to optimize cloud-native software infrastructure, frameworks and workloads. These partnerships include contributions to projects including Kubernetes and Istio, along with several CI/CD tools used for creating cloud-native software for the Arm architecture. Arm will also work to improve machine learning frameworks such as TensorFlow and a number of workloads such as big data, analytics and media processing. The company is moving into more traditional enterprise spaces now, Bergey said, noting the work it has done with VMware on its Project Monterey and providing support for Red Hat's OpenShift and SAP's HANA. "These cloud providers all use GPUs to underpin their cloud workloads, and the majority of them are using Arm," Bergey said.


How quantum physicists are looking for life on exoplanets

So, some of the biggest things in the universe are certainly quantum mechanical, including supermassive black holes, which can lose energy through a quantum phenomenon known as Hawking radiation. The second point is that one often thinks quantum deals with very low temperatures. Again, to take our sun as an example—it's very hot, but that's quantum mechanical. Low temperature doesn't serve as a requirement for quantum. This example of a star and the quantumness of the fusion process and the high temperatures associated with that—I just want to broaden the view of what quantum mechanics is and how ubiquitous it is. ... It's quite amazing that we can determine what is in these planets' atmospheres—planets that would be impossible for humans to ever visit. That, and we can look for signatures of life, like, are there molecules that we associate with life floating around in these planets, at least if it's Earth-like life; then we might be able to determine with some probability that some planet way out there that no human could ever visit, harbors life. Or maybe we could discover other candidate forms of life.


How Is Platform Engineering Different from DevOps and SRE?

Over time, thought leaders came up with different metrics for organizations to gauge the success of their DevOps setup. The DevOps bible, “Accelerate,” established lead time, deployment frequency, change failure rate and mean time to recovery (MTTR) as standard metrics. Reports like the State of DevOps from Puppet and Humanitec’s DevOps benchmarking study used these metrics to compare top-performing organizations to low-performing organizations and deduce which practices contribute most to their degree of success. DevOps unlocked new levels of productivity and efficiency for some software engineering teams. But for many organizations, DevOps adoption fell short of their lofty expectations. Manuel Pais and Matthew Skelton documented these anti-patterns in their book “DevOps Topologies.” In one scenario, an organization tries to implement true DevOps and removes dedicated operations roles. Developers are now responsible for infrastructure, managing environments, monitoring, etc., in addition to their previous workload. Often senior developers bear the brunt of this shift, either by doing the work themselves or by assisting their junior colleagues.


The Cyber Security Head Game

Just as the predators of the fish below are never going to go away (which is why this fish camouflages itself and sports huge fake eyes to scare predators), cyber predators also will never go away. And the best of these cyber predators will continue to penetrate even the strongest defenses, because the exponential increase in IT system complexity, which makes it increasingly difficult to even understand the full extent of what you're defending, favors cyber attackers over cyber defenders. So we need to assume that some hackers will inevitably get inside our networks and thus we must adopt strategies of deception, similar to those employed successfully by our fish here, to lessen the harm from competent hackers, who manage to get up close and personal. We also need to create doubt in hackers’ minds, about the benefits of attacking us in the first place, in the same way that the poisonous cane toad avoids attacks from predators who know the toad’s skin has lethal poison glands, and milk snakes, who have no poison, but discourage would-be predators by mimicking the coloration of coral snakes, who definitely do have deadly venom.


US Cyber-Defense Agency Urges Companies to Automate Threat Testing

Automated threat testing is still not very widespread, according to the official, who added that organizations sometimes don’t really follow through after deploying expensive tools on their network and instead just assume they’re doing the job. Automating security controls will make it easier to stop attackers from relying on established tactics. The top threat actors are still going back and leveraging vulnerabilities that are 10 years old or even older, warned the CISA official. CISA is making the recommendation in collaboration with the Center for Threat-Informed Defense, a 29-member nonprofit formed in 2019 that draws on MITRE’s framework. Iman Ghanizada, global head of autonomic security operations at Google Cloud, a research sponsor of the Center, said automated testing is important for creating continuous feedback loops that can steadily improve protection. “Whether you are a large company or a startup, you have to have visibility, analytics, response and continuous feedback,” he said.


Smart Cities: Mobility ecosystems for a more sustainable future

Although every city is different, leading cities are becoming smarter through their participation in large, complex, digitally enabled ecosystems. The question for many urban leaders, however, is how to engage with them effectively. Our experience in working with large transportation and communications clients yields a multilayered model and approach to guide the design and management of urban mobility systems. Given the interconnected nature of the building blocks of mobility, each layer—demand, supply, and foundational—is critical. Cities must understand and manage all the interactions and interdependencies. For example, demand for different forms of transportation is enabled via available modes of transit and supporting infrastructure. None of these would be possible without regulations, financing, insurance, and innovation. ... To achieve its vision of becoming a 45-minute city, Singapore is focusing on building its infrastructure (e.g., it is building intermodal mobility hubs to allow commuters to move seamlessly from one mode of transportation to another). The city is developing a robust innovation ecosystem, collaborating with many private-sector players. 


How to Draw and Retain Top Talent in Cyber Security

Before you introduce policies to increase diversity, you need to know who is currently applying. Gather data on applicants to establish if you need to take proactive steps to attract specific groups – you can’t make rational business decisions without data. Analyze job descriptions to eliminate bias so you aren’t deterring anyone. Review the language -- are you unconsciously drafting job advertisements and application forms with a white male in mind? Consider a post-application survey so you can establish what is appealing to recruits and what might cause them to drop out. You’ll be surprised how many people want to share their feedback because a negative job application process can deter an applicant for good, and you could be missing out on the best talent through ignorance. We implemented an Applicant Tracking System to understand the sources our candidates are coming from, see how diverse the candidate pool is (or not), and improve the candidate experience by being able to track how their process progresses and ends. ... Once you’ve got these cyber professionals on board, you need to keep them. 


Why shift left is burdening your dev teams

Security and compliance challenges are a significant barrier to most organizations’ innovation strategies, according to CloudBees. The survey also reveals agreement among C-suite executives that a shift left security strategy is a burden on dev teams. 76% of C-suite executives say that compliance challenges limit their company’s ability to innovate, and 75% say the same of security challenges. This is due, in part, to the significant time spent on compliance audits, risks, and defects. At the same time, C-suite executives overwhelmingly favor a shift left approach, a strategy of moving software testing and evaluation to earlier in the development lifecycle, placing the burden of compliance on development teams. In fact, 83% of C-suite executives say the approach is important for them as an organization, and 77% say they are currently implementing a shift left security and compliance approach. This is despite 58% of C-suite executives reporting that shift left is a burden on their developers. “These survey findings underscore the urgent need to transform the software security and compliance landscape.



Quote for the day:

"Courage is the ability to execute tasks and assignments without fear or intimidation." -- Jaachynma N.E. Agu

Daily Tech Digest - December 24, 2021

A CIO’s Guide To Hybrid Work

CIOs reimagining an organization’s digital strategy need to ensure that their employees can communicate effectively and have complete access to resources needed to perform their jobs. This means that employees do not receive just their laptops and an email account but have full access to a complete tech stack and set of solutions that empower them to interact with their peers and customers. AI- and ML-powered solutions help enhance the employee experience by saving time for people to connect with their teams and helping infuse mental well-being along with a company’s values and purpose. The best way to understand whether your employees are well supported to carry on their job is by gathering feedback from them. Send out a simple form with both open and closed questions on the potential communication gaps, remote work support and access to available resources. Once you have all the information, analyze the gaps and improvement opportunities to pick the right tools. Make sure that the tools you choose integrate with your organization’s tech ecosystem while delivering value.


Whatever Happened to Business Supercomputers?

Supercomputers are primarily used in areas in which sizeable models are developed to make predictions involving a vast number of measurements, notes Francisco Webber, CEO at Cortical.io, a firm that specializes in extracting value from unstructured documents. “The same algorithm is applied over and over on many observational instances that can be computed in parallel,” says Webber, “hence the acceleration potential when run on large numbers of CPUs.” Supercomputer applications, he explains, can range from experiments in the Large Hadron Collider, which can generate up to a petabyte of data per day, to meteorology, where complex weather phenomena are broken down to the behavior of myriads of particles. There's also a growing interest in graphics processing unit (GPU)-and tensor processing unit (TPU)-based supercomputers. “These machines may be well suited to certain artificial intelligence and machine learning problems, such as training algorithms [and] analyzing large volumes of image data,” Buchholz says.


The State of Hybrid Workforce Security 2021

The time is right for IT leaders to turn to their teams and gain a clear understanding of what they actually have in place. While the initial response to the pandemic was reactionary, now is a moment to assess an organization’s app and security landscape and what is actually providing access to users no matter where they are, whether they’re at home, in the branch, or anywhere in between. Rationalizing the purpose and usage of solutions that are in place today provides a real opportunity for consolidation—one that did not seriously exist previously. Many organizations will be able to drive better outcomes around security posture, reducing risk, and improving total cost of ownership. Consolidating the number of disparate tools in use to provide secure user access improves security posture consistency and reduces the number of policies that have to be administered. Besides reducing needed multi-product training and management effort, a platform approach drives better economies of scale, resulting in a lower total cost of ownership. Net-net, consolidation delivers a far more effective approach for security.


What is Web3, is it the new phase of the Internet and why are Elon Musk and Jack Dorsey against it?

In the Web3 world, search engines, marketplaces and social networks will have no overriding overlord. So you can control your own data and have a single personalised account where you could flit from your emails to online shopping and social media, creating a public record of your activity on the blockchain system in the process. A blockchain is a secure database that is operated by users collectively and can be searched by anyone. People are also rewarded with tokens for participating. It comes in the form of a shared ledger that uses cryptography to secure information. This ledger takes the form of a series of records or “blocks” that are each added onto the previous block in the chain, hence the name. Each block contains a timestamp, data, and a hash. This is a unique identifier for all the contents of the block, sort of like a digital fingerprint. ... The idea of a decentralised internet may sound far-fetched but big tech companies are already betting big on it and even assembling Web3 teams.
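The block structure described here — a timestamp, data, and a hash linking each block to its predecessor — can be sketched in a few lines. This is a toy illustration of the chaining idea, not a real blockchain implementation (it omits consensus, signatures, and distribution entirely):

```python
# Toy sketch of chained blocks: each block stores a timestamp, data, and
# the hash of the previous block, so tampering with one block breaks
# every hash that follows it.
import hashlib
import json
import time

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    # Hash the block's own contents to get its "digital fingerprint".
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block("genesis", "0" * 64)
second = make_block("payment: alice -> bob", genesis["hash"])
print(second["prev_hash"] == genesis["hash"])  # True: blocks chain together
```

Because each block's fingerprint covers the previous block's hash, changing any historical record would require recomputing every later block — which is what makes the shared ledger tamper-evident.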


Will A.I. Guarantees Our Humane Futures?

Both private firms and governments adopting A.I.-driven technologies could be attracted to the opportunity of violating the individual’s privacy and data security for their own selfish reasons. Large private corporations, especially technology and social media companies such as the big four of big tech (Google, Amazon, Apple, and Facebook), are already sitting on massive quantities of user data, which they’re looking to monetize, and such monetization of data in the name of customized services and targeted advertisements could have a disastrous impact on the user’s privacy and data security. The bigger threat will emerge when such sensitive user data is misused for social engineering to alter the customer's behavior and choices. ... Today, algorithms are so sophisticated that they can predict the user's next action based on analysis of their private data. It’s very much possible to make use of such user data to nudge the individual discreetly to alter his behavior and choices, and this has far-reaching implications for the economy, for society, as well as for the security of a democratic nation.


Protection against the worst consequences of a cyberattack

Businesses need an incident response plan that clearly outlines the steps to follow when a data breach occurs. Organizations that neglect this become the low-hanging fruit that attackers go after: even a rudimentary plan is better than no plan at all, and those without one will suffer a much higher impact. Teams need to identify and classify data to understand what levels of protection are needed, a step that is regrettably missed all the time. For instance, personally identifiable customer information needs a different level of protection than the photos from the last Christmas party. Teams also need to maintain cyber hygiene through regular patching, and since 90% of breaches start with an email, it is very important to have email protection, multi-factor authentication and endpoint protection to prevent any lateral movement by cybercriminals. Perhaps my biggest piece of advice is to have experienced personnel monitoring your environment 24/7, 365 days a year (including Christmas).
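As a rough illustration of the classify-then-protect step, the sketch below maps classification tiers to baseline controls. The tier names and control values are hypothetical examples, not a standard or any specific framework.

```python
# Hypothetical classification tiers and baseline controls -- the labels and
# mappings below are illustrative, not taken from any real standard.
CONTROLS = {
    "public":       {"encryption": False, "mfa": False, "retention_days": 365},
    "internal":     {"encryption": True,  "mfa": False, "retention_days": 365},
    "confidential": {"encryption": True,  "mfa": True,  "retention_days": 180},
    "restricted":   {"encryption": True,  "mfa": True,  "retention_days": 90},
}

def required_controls(label):
    """Look up the protection baseline for a classified data asset."""
    if label not in CONTROLS:
        raise ValueError(f"unclassified data: {label!r} -- classify before storing")
    return CONTROLS[label]

# PII warrants stronger controls than the Christmas-party photos.
print(required_controls("restricted"))
print(required_controls("public"))
```

The point of the lookup failing loudly on unknown labels is the same point the excerpt makes: data that has never been classified cannot be given the right level of protection.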


Initial access brokers: How are IABs related to the rise in ransomware attacks?

Initial access brokers sell access to corporate networks to anyone willing to buy it. Initially, IABs sold company access to cybercriminals with various interests: getting a foothold in a company to steal its intellectual property or corporate secrets (cyberespionage), finding accounting data that enables financial fraud or simply credit card numbers, adding corporate machines to botnets, using the access to send spam, destroying data, and so on. There are many scenarios in which buying access to a company is attractive to a fraudster, but that was before the ransomware era. ... Ransomware groups saw an opportunity here to stop spending time on the initial compromise of companies and to focus instead on the internal deployment of their ransomware and, in some cases, the complete erasure of the companies' backup data. The cost of access is negligible compared with the ransom demanded of the victims. IAB activity has become increasingly popular on cybercriminal underground forums and marketplaces.


8 Real Ways CIOs Can Drive Sustainability, Fight Climate Change

The concept of the circular economy has been around for a while, but it’s now taking off in a big way. NTT’s Lombard says that it’s a key to getting to net zero. This means establishing business and IT supply chains that focus on optimizing the lifespan of equipment, moving toward zero-emission closed loop recycling and curtailing e-waste. For example, there’s a growing second-hand market for high-end gear, including hyperscale infrastructure. Companies like IT Renew recertify these systems and place them under warranty. “Everyone wins,” says Lucas Beran, principal analyst at consulting firm Dell’Oro Group. “The original user gets two or three years of use; the buyer gets another three or four years -- all while TCO and the carbon footprint drop.” ... Data centers are expected to consume about 8% of the world's electricity by 2030. While refreshing legacy servers, optimizing data, virtualizing workloads, consolidating virtual machines and green hosting all deliver benefits, these strategies aren’t enough to tackle climate change. Organizations must fundamentally rethink data center design and function.
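The amortization argument behind "TCO and the carbon footprint drop" can be made concrete with back-of-the-envelope arithmetic: the same embodied (manufacturing) carbon is spread over more years of service. The embodied-carbon figure below is a placeholder for illustration, not a measurement.

```python
# Illustrative figures only: the embodied-carbon value is a hypothetical
# placeholder, not data from any vendor or study.
embodied_kg_co2 = 1300.0     # manufacturing footprint of one server (assumed)
first_owner_years = 3        # "the original user gets two or three years"
second_owner_years = 4       # "the buyer gets another three or four years"

single_life = embodied_kg_co2 / first_owner_years
extended_life = embodied_kg_co2 / (first_owner_years + second_owner_years)

print(f"{single_life:.0f} kg CO2/yr vs {extended_life:.0f} kg CO2/yr")
# Recertification spreads the same manufacturing footprint over seven years
# instead of three, so the annualized embodied carbon drops by more than half.
```

The same per-year amortization applies to purchase cost, which is why both TCO and footprint fall when recertified gear gets a second life.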


How Safety Became One of The Most Critical Smart City Applications

For cities, it can be challenging to ensure citizen and worker safety when natural disasters occur. Incidents such as hurricanes, floods, fires and gas leaks are unpredictable and often impossible to prevent. To put it in perspective, most people have lived through some disaster, with 87% of consumers saying they’ve been impacted by one in the last five years (not counting the COVID pandemic). Safety will only become more critical over the next few decades as natural disasters become more frequent, intense and costly. Since 1970, the number of disasters worldwide has more than quadrupled to around 400 a year. Since 1998, natural disasters worldwide have killed more than 1.3 million people and left another 4.4 billion injured, homeless, displaced, or in need of emergency assistance. Smart sensors and advanced analytics can help communities better predict, prepare for and respond to these emergency situations. For example, IoT sensors, such as pole tilt, electric distribution line, leak detection and air quality sensors, can be leveraged to mitigate risk and minimize damage.
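A minimal sketch of how readings from the sensor types named above might be turned into alerts follows; the sensor names and threshold values are hypothetical, chosen only to illustrate the pattern.

```python
# Hypothetical thresholds for the sensor types mentioned in the excerpt
# (pole tilt, distribution line, leak detection, air quality). The names
# and values are illustrative, not real operating limits.
THRESHOLDS = {
    "pole_tilt_degrees": 10.0,
    "line_current_amps": 400.0,
    "leak_flow_lpm": 5.0,
    "air_quality_pm25": 150.0,
}

def alerts(readings):
    """Return, sorted, the sensors whose readings exceed their thresholds."""
    return sorted(
        name for name, value in readings.items()
        if value > THRESHOLDS.get(name, float("inf"))
    )

storm_readings = {
    "pole_tilt_degrees": 14.2,   # pole leaning after high winds
    "line_current_amps": 120.0,
    "leak_flow_lpm": 7.5,        # possible gas or water leak
    "air_quality_pm25": 40.0,
}
print(alerts(storm_readings))  # ['leak_flow_lpm', 'pole_tilt_degrees']
```

In practice the analytics layer would correlate such alerts across sensors and over time rather than thresholding each reading independently, but the threshold check is the building block.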


Avoiding Technical Bankruptcy: a Whole-Organization Perspective on Technical Debt

It is regrettable that the meaning of the technical debt metaphor has been diluted in this way, but in language as in life in general, pragmatics trump intentions. This is where we are: what counts as "technical debt" is largely just the by-product of normal software development. Of course, no one wants code problems to accumulate in this way, so the question becomes: why do we seem to incur so much inadvertent technical debt? What is it about the way we do software development that leads to this unwanted result? These questions are important, since if we can go into technical debt, then it follows that we can become technically insolvent and go technically bankrupt. In fact, this is exactly what seems to be happening to many software development efforts. Ward Cunningham notes that "entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation". That stand-still is technical bankruptcy.



Quote for the day:

“When you take risks you learn that there will be times when you succeed and there will be times when you fail, and both are equally important.” -- Ellen DeGeneres