Showing posts with label smart city. Show all posts
Showing posts with label smart city. Show all posts

Daily Tech Digest - September 20, 2025


Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer


Five forces shaping the next wave of quantum innovation

Quantum computers are expected to solve problems currently intractable for even the world’s fastest supercomputers. Their core strengths — efficiently finding hidden patterns in complex datasets and navigating vast optimization challenges — will enable the design of novel drugs and materials, the creation of superior financial algorithms and open new frontiers in cryptography and cybersecurity. ... The quantum ecosystem now largely agrees that simply scaling up today’s computers, which suffer from significant noise and errors that prevent fault-tolerant operation, won’t unlock the most valuable commercial applications. The industry’s focus has shifted to quantum error correction as the key to building robust and scalable fault-tolerant machines. ... Most early quantum computing companies tried a full-stack approach. Now that the industry is maturing, a rich ecosystem of middle-of-the-stack players has emerged. This evolution allows companies to focus on what they do best and buy components and capabilities as needed, such as control systems from Quantum Machines and quantum software development from firms ... recent innovations in quantum networking technology have made a scale-out approach a serious contender. 


Post-Modern Ransomware: When Exfiltration Replaces Encryption

Exfiltration-first attacks have re-written the rules, with stolen data providing criminals with a faster, more reliable payday than the complex mechanics of encryption ever could. The threat of leaking data like financial records, intellectual property, and customer and employee details delivers instant leverage. Unlike encryption, if the victim stands firm and refuses to pay up, criminal groups can always sell their digital loot on the dark web or use it to fuel more targeted attacks. ... Phishing emails, once known for being riddled with tell-tale grammar and spelling mistakes, are now polished, personalized and delivered in perfect English. AI-powered deepfake voices and videos are providing convincing impersonations of executives or trusted colleagues that have defrauded companies for millions. At the same time, attackers are deploying custom chatbots to manage ransom negotiations across multiple victims simultaneously, applying pressure with the relentless efficiency of machines. ... Yet resilience is not simply a matter of dashboards and detection thresholds – it is equally about supporting those on the frontlines. Security leaders already working punishing hours under relentless scrutiny cannot be expected to withstand endless fatigue and a culture of blame without consequence. Organizations must also embed support for their teams into their response frameworks, from clear lines of communication and decompression time to wellbeing checks. 


The Data Sovereignty Challenge: How CIOs Are Adapting in Real Time

The uncertainty is driving concern. “There's been a lot more talk around, ‘Should we be managing sovereign cloud, should we be using on-premises more, should we be relying on our non-North American public contractors?” said Tracy Woo, a principal analyst with researcher and advisory firm Forrester. Ditching a major public cloud provider over sovereignty concerns, however, is not a practical option. These providers often underpin expansive global workloads, so migrating to a new architecture would be time-consuming, costly, and complex. There also isn’t a simple direct switch that companies can make if they’re looking to avoid public cloud; sourcing alternatives must be done thoughtfully, not just in reaction to one challenge. ... “There's a nervousness around deployment of AI, and I think that nervousness comes from -- definitely in conversations with other CIOs -- not knowing the data,” said Bell. Although decoupling from the major cloud providers is impractical on many fronts, issues of sovereignty as well as cost could still push CIOs to embrace a more localized approach, Woo said. “People are realizing that we don't necessarily need all the bells and whistles of the public cloud providers, whether that's for latency or performance reasons, or whether it's for cost or whether that's for sovereignty reasons,” explained Woo. 


Enterprise AI enters the age of agency, but autonomy must be governed

Agentic AI systems don’t just predict or recommend, they act. These intelligent software agents operate with autonomy toward defined business goals, planning, learning, and executing across enterprise workflows. This is not the next version of traditional automation or static bots. It’s a fundamentally different operating paradigm, one that will shape the future of digital enterprises. ... For many enterprises, the last decade of AI investment has focused on surfacing insights: detecting fraud, forecasting demand, and predicting churn. These are valuable outcomes, but they still require humans or rigid automation to respond. Agentic AI closes that gap. These agents combine machine learning, contextual awareness, planning, and decision logic to take goal-directed action. They can process ambiguity, work across systems, resolve exceptions, and adapt over time. ... Agentic AI will not simply automate tasks. It will reshape how work is designed, measured, and managed. As autonomous agents take on operational responsibility, human teams will move toward supervision, exception resolution, and strategic oversight. New KPIs will emerge, not just around cost or cycle time, but around agent quality, business impact, and compliance resilience. This shift will also demand new talent models. Enterprises must upskill teams to manage AI systems, not just processes. 


Cybersecurity in smart cities under scrutiny

The digital transformation of public services involves “an accelerated convergence between IT and OT systems, as well as the massive incorporation of connected IoT devices,” she explains, which gives rise to challenges such as an expanding attack surface or the coexistence of obsolete infrastructure with modern ones, in addition to a lack of visibility and control over devices deployed by multiple providers. ... “According to the European Cyber ​​Security Organisation, 86% of European local governments with IoT deployments have suffered some security breach related to these devices,” she says. Accenture’s Domínguez adds that the challenge is to consider “the fragmentation of responsibilities between administrations, concessionaires, and third parties, which complicates cybersecurity governance and requires advanced coordination models.” De la Cuesta also emphasizes the siloed nature of project development, which significantly hinders the development of an active cybersecurity strategy. ... In the integration of new tools, despite Spain holding a leading position in areas such as 5G, “technology moves much faster than the government’s ability to react,” he says. “It’s not like a private company, which has a certain agility to make investments,” he explains. “Public administration is much slower. Budgets are different. Administrative procedures are extremely long. From the moment a project is first discussed until it is actually executed, many years pass.”


Your SDLC Has an Evil Twin — and AI Built It

Welcome to the shadow SDLC — the one your team built with AI when you weren't looking: It generates code, dependencies, configs, and even tests at machine speed, but without any of your governance, review processes, or security guardrails. ... It’s not just about insecure code sneaking into production, but rather about losing ownership of the very processes you’ve worked to streamline. Your “evil twin” SDLC comes with: Unknown provenance → You can’t always trace where AI-generated code or dependencies came from. Inconsistent reliability → AI may generate tests or configs that look fine but fail in production. Invisible vulnerabilities → Flaws that never hit a backlog because they bypass reviews entirely. ... AI assistants are now pulling in OSS dependencies you didn’t choose — sometimes outdated, sometimes insecure, sometimes flat-out malicious. While your team already uses hygiene tools like Dependabot or Renovate, they’re only table stakes that don’t provide governance. ... The “evil twin” of your SDLC isn’t going away. It’s already here, writing code, pulling dependencies, and shaping workflows. The question is whether you’ll treat it as an uncontrolled shadow pipeline — or bring it under the same governance and accountability as your human-led one. Because in today’s environment, you don’t just own the SDLC you designed. You also own the one AI is building — whether you control it or not.


'ShadowLeak' ChatGPT Attack Allows Hackers to Invisibly Steal Emails

Researchers at Radware realized the issue earlier this spring, when they figured out a way of stealing anything they wanted from Gmail users who integrate ChatGPT. Not only was their trick devilishly simple, but it left no trace on an end user's network — not even an iota of the suspicious Web traffic typical of data exfiltration attacks. As such, the user had no way of detecting the attack, let alone stopping it. ... To perform a ShadowLeak attack, attackers send an outwardly normal-looking email to their target. They surreptitiously embed code in the body of the message, in a format that the recipient will not notice — for example, in extremely tiny text, or white text on a white background. The code should be written in HTML, being standard for email and therefore less suspicious than other, more powerful languages would be. ... The malicious code can instruct the AI to communicate the contents of the victim's emails, or anything else the target has granted ChatGPT access to, to an attacker-controlled server. ... Organizations can try to compensate with their own security controls — for example, by vetting incoming emails with their own tools. However, Geenens points out, "You need something that is smarter than just the regular-expression engines and the state machines that we've built. Those will not work anymore, because there are an infinite number of permutations with which you can write an attack in natural language." 


UK: World’s first quantum computer built using standard silicon chips launched

This is reportedly the first quantum computer to be built using the standard complementary metal-oxide-semiconductor (CMOS) chip fabrication process which is the same transistor technology used in conventional computers. A key part of this approach is building cryoelectronics that connect qubits with control circuits that work at very low temperatures, making it possible to scale up quantum processors greatly. “This is quantum computing’s silicon moment,” James Palles‑Dimmock, Quantum Motion’s CEO, stated. ... In contrast to other quantum computing approaches, the startup used high-volume industrial 300 millimeter chipmaking processes from commercial foundries to produce qubits. The architecture, control stack, and manufacturing approach are all built to scale to host millions of qubits and pave the way for fault-tolerant, utility-scale, and commercially viable quantum computing. “With the delivery of this system, Quantum Motion is on track to bring commercially useful quantum computers to market this decade,” Hugo Saleh, Quantum Motion’s CEO and president, revealed. ... The system’s underlying QPU is built on a tile-based architecture, integrating all compute, readout, and control components into a dense, repeatable array. This design enables future expansion to millions of qubits per chip, with no changes to the system’s physical footprint.


Key strategies to reduce IT complexity

The cloud has multiplied the fragmentation of solutions within companies, expanding the number of environments, vendors, APIs, and integration approaches, which has raised the skill set, necessitated more complex governance, and prompted the emergence of cross-functional roles between IT and business. Cybersecurity also introduces further levels of complexity, introducing new platforms, monitoring tools, regulatory requirements, and risk management approaches that must be overseen by expert personnel. And then there’s shadow IT. With the ease of access to cloud technologies, it’s not uncommon for business units to independently activate services without involving IT, generating further risks. ... “Structured upskilling and reskilling programs are needed to prepare people to manage new technologies,” says Massara. “So is an organizational model capable of managing a growing number of projects, which can no longer be handled in a one-off manner. The approach to project management is changing because the project portfolio has expanded significantly, and a structured PMO is required, with project managers who often no longer reside solely in IT, but directly within the business.” ... While it’s true that an IT system with disparate systems leads to greater complexity, companies are still very cost-conscious and wary about heavily investing in unification right away. But as systems become obsolete, they become more harmonized.


Unshackling IT: Why Third-Party Support Is a Strategic Imperative, Especially for AI

One of the most compelling arguments for independent third-party support is its inherent vendor neutrality. When a company relies solely on a software vendor for support, that vendor naturally has a vested interest in promoting its latest upgrades, cloud migrations, and proprietary solutions. This can create a conflict of interest, potentially pushing customers towards expensive, unnecessary upgrades or discouraging them from exploring alternatives that might be a better fit for their unique needs. ... The recent acquisition of VMware by Broadcom provides a compelling and timely illustration of why third-party support is becoming increasingly critical. Following the merger, many VMware customers have expressed significant dissatisfaction with changes to licensing models, product roadmaps, and, crucially, support. Broadcom has been criticized for restructuring VMware’s offerings and reportedly reducing support for smaller customers, pushing them towards bundled, more expensive solutions. ... The shift towards third-party support isn’t just about cost savings; it’s about regaining control, accessing unbiased expertise, and ensuring business continuity in a rapidly changing technological landscape. For companies making critical decisions about AI integration and managing complex enterprise systems, providers like Spinnaker Support offer a strategic advantage.

Daily Tech Digest - July 02, 2024

The Changing Role of the Chief Data Officer

The chief data officer originally played more “defense” than “offense.” The position focused on data security, fraud protection, and Data Governance, and tended to attract people from a technical or legal background. CDOs now may take on a more offensive strategy, proactively finding ways to extract value from the data for the benefit of the wider business, and may come from an analytics or business background. Of course, in reality, the choice between offense and defense is a false one, as companies must do both. ... Major trends for CDOs in the future will include incorporating cutting-edge technology, such as generative AI, large language models, machine learning, and increasingly sophisticated forms of automation. The role is also spreading to a wider variety of industry sectors, such as healthcare, the private sector, and higher education. One of the major challenges is already in progress: responding to the COVID-19 pandemic. The pandemic hugely shook global supply chains, created new business markets, and also radically changed the nature of business itself. 


Duplicate Tech: A Bottom-Line Issue Worth Resolving

The patchwork nature of combined technologies can hinder processes and cause data fragmentation or loss. Moreover, differing cybersecurity capabilities among technologies can expose the organization to increased risk of cyberattacks, as older or less secure systems may be more vulnerable to breaches. Retaining multiple technologies may initially seem prudent in a merger or acquisition, but ultimately it proves detrimental. The drawbacks — from duplicated data and disconnected processes to inefficiencies and security vulnerabilities — far outweigh any perceived benefits, highlighting the critical need for streamlined, unified IT systems. ... There are compelling reasons to remove the dead weight of duplicate technologies and adopt a singular technology. The first step in eliminating tech redundancy is to evaluate existing technologies to determine which tools best align with current and future business needs. A collaborative approach with all relevant stakeholders is recommended to ensure the chosen solution supports organizational goals and avoids unnecessary repetition.


Disability community has long wrestled with 'helpful' technologies—lessons for everyone in dealing with AI

This disability community perspective can be invaluable in approaching new technologies that can assist both disabled and nondisabled people. You can't substitute pretending to be disabled for the experience of actually being disabled, but accessibility can benefit everyone. This is sometimes called the curb-cut effect after the ways that putting a ramp in a curb to help a wheelchair user access the sidewalk also benefits people with strollers, rolling suitcases and bicycles. ... Disability advocates have long battled this type of well-meaning but intrusive assistance—for example, by putting spikes on wheelchair handles to keep people from pushing a person in a wheelchair without being asked to or advocating for services that keep the disabled person in control. The disabled community instead offers a model of assistance as a collaborative effort. Applying this to AI can help to ensure that new AI tools support human autonomy rather than taking over. A key goal of my lab's work is to develop AI-powered assistive robotics that treat the user as an equal partner. We have shown that this model is not just valuable, but inevitable. 


What is the Role of Explainable AI (XAI) In Security?

XAI in cybersecurity is like a colleague who never stops working. While AI helps automatically detect and respond to rapidly evolving threats, XAI helps security professionals understand how these decisions are being made. “Explainable AI sheds light on the inner workings of AI models, making them transparent and trustworthy. Revealing the why behind the models’ predictions, XAI empowers the analysts to make informed decisions. It also enables fast adaptation by exposing insights that lead to quick fine-tuning or new strategies in the face of advanced threats. And most importantly, XAI facilitates collaboration between humans and AI, creating a context in which human intuition complements computational power.,” Kolcsár added. ... With XAI working behind the scenes, security teams can quickly discover the root cause of a security alert and initiate a more targeted response, minimizing the overall damage caused by an attack and limiting resource wastage. As transparency allows security professionals to understand how AI models adapt to rapidly evolving threats, they can also ensure that security measures are consistently effective. 


10 ways AI can make IT more productive

By infusing AI into business processes, enterprises can achieve levels of productivity, efficiency, consistency, and scale that were unimaginable a decade ago, says Jim Liddle, CIO at hybrid cloud storage provider Nasuni. He observes that mundane repetitive tasks, such as data entry and collection, can be easily handled 24/7 by intelligent AI algorithms. “Complex business decisions, such as fraud detection and price optimization, can now be made in real-time based on huge amounts of data,” Liddle states. “Workflows that spanned days or weeks can now be completed in hours or minutes.”  “Enterprises have long sought to drive efficiency and scale through automation, first with simple programmatic rules-based systems and later with more advanced algorithmic software,” Liddle says.  ... “By reducing boilerplating, teams can save time on repetitive tasks while automated and enhanced documentation keeps pace with code changes and project developments.” He notes that AI can also automatically create pull requests and integrate with project management software. Additionally, AI can generate suggestions to resolve bugs, propose new features, and improve code reviews.


How Tomorrow's Smart Cities Will Think For Themselves

When creating a cognitive city, the fundamental need is to move the computing power to where data is generated: where people live, work and travel. That applies whether you’re building a totally new smart city or retrofitting technology to a pre-existing ‘brownfield’ city. Either way, edge is key here. You’re dealing with information from sensors in rubbish bins, drains, and cameras in traffic lights. ... But in years to come the city itself will respond dynamically to the changing physical world, adjusting energy use in real-time to respond to the weather, for example. The evolution of monitoring has come from a machine-to-machine foundation, with the introduction of the Internet of Things (IoT) and now artificial intelligence (AI) becoming transformational in enabling smart technologies to become dynamic. Emerging AI technologies such as large language models will also play a role going forward, making it easy for both city planners and ordinary citizens to interact with the city they live in. Edge will be the key ingredient which gives us effective control of these cities of the future.


Serverless cloud technology fades away

The meaning of serverless computing became diluted over time. Originally coined to describe a model where developers could run code without provisioning or managing servers, it has since been applied to a wide range of services that do not fit its original definition. This led to a confusing loss of precision. It’s crucial to focus on the functional characteristics of serverless computing. The elements of serverless—agility, cost-efficiency, and the ability to rapidly deploy and scale applications—remain valuable. It’s important to concentrate on how these characteristics contribute to achieving business goals rather than becoming fixated on the specific technologies in use. Serverless technology will continue to fade into the background due to the rise of other cloud computing paradigms, such as edge computing and microclouds. ... The explosion of generative AI also contributed to the shifting landscape. Cloud providers are deeply invested in enabling AI-driven solutions, which often require specialized computer resources and significant data management capabilities, areas where traditional serverless models may not always excel.


Infrastructure-as-code and its game-changing impact on rapid solutions development

Automation is one of the main benefits of adopting an IaC approach. By automating infrastructure provisioning, IaC allows configuration to be accomplished at a faster pace. Automation also reduces the risk of errors that can result from manual coding, empowering greater consistency by standardizing the development and deployment of the infrastructure. ... Developers can rapidly assemble and deploy its infrastructure blocks, reusing them as needed throughout the development process. When adjustments are needed, developers can simply update the code the blocks are built on rather than making manual one-off changes to infrastructure components. Testing and tracking are more streamlined with IaC since the IaC code serves as a centralized and readily accessible source for documentation on the infrastructure. It also streamlines the testing process, allowing for automated unit testing of compliance, validation, and other processes before deploying. Additionally, IaC empowers developers to take advantage of the benefits provided by cloud computing. It facilitates direct interaction with the cloud’s exposed API, allowing developers to dynamically provision, manage, and orchestrate resources.


What is Multimodal AI? Here’s Everything You Need to Know

Multimodal AI describes artificial intelligence systems that can simultaneously process and interpret data from various sources such as text, images, audio, and video. Unlike traditional AI models that depend on a single type of data, multimodal AI provides a holistic approach to data processing. ... Although multimodal AI and generative AI share similarities, they differ fundamentally. For instance, generative AI focuses on creating new content from a single type of prompt, such as creating images from textual descriptions. In contrast, multimodal AI processes and understands different sensory inputs, allowing users to input various data types and receive multimodal outputs. ... Multimodal AI represents a significant advancement in the field of artificial intelligence. Therefore, by understanding and leveraging this advanced technology, data scientists and AI professionals can pave the way for more sophisticated, context-aware, and human-like AI systems, ultimately enriching our interaction with technology and the world around us. 


Excel Enthusiast to Supply Chain Innovator – The Journey to Building One of the Largest Analytic Platforms

While ChatGPT has helped raise awareness about AI capabilities, explaining how to integrate AI has presented challenges, especially when managing over 200 different data analytic reports. To address the different uses, Miranda has simplified AI into three categories: rule-based AI, learning AI (machine learning), and generative AI. Generative AI has emerged as the most dynamic tool among the three for executing and recording data analytics. Its versatility and adaptability make it particularly effective in capturing and processing diverse data sets, contributing to more comprehensive analytics outcomes. Miranda says, “People in analytics might not jump out of bed excited to tackle documentation, but it's a critical aspect of our work. Without proper documentation, we risk becoming a single point of failure, which is something we want to avoid.” ... These recordings are then converted into transcripts and securely stored in a containerized environment, streamlining the documentation process while ensuring data security. Because of process automation, Miranda says that the organization generated 240,000 work hours last year, and they anticipate even more this year.



Quote for the day:

"Life is like riding a bicycle. To keep your balance you must keep moving." -- Albert Einstein

Daily Tech Digest - August 30, 2023

Generative AI Faces an Existential IP Reckoning of Its Own Making

Clearly, this situation is untenable, with a raft of dire consequences already beginning to emerge. Should the courts determine that generative AI firms aren’t protected by the fair use doctrine, the still-budding industry could be on the hook for practically limitless damages. Meanwhile, platforms like Reddit are beginning to aggressively push back against unchecked data scraping. ... These sorts of unintended externalities will only continue to multiply unless strong measures are taken to protect copyright holders. Government can play an important role here by introducing new legislation to bring IP laws into the 21st century, replacing outdated regulatory frameworks created decades before anyone could have predicted the rise of generative AI. Government can also spur the creation of a centralized licensing body to work with national and international rights organizations to ensure that artists, content creators, and publishers are being fairly compensated for the use of their content by generative AI companies.


6 hidden dangers of low code

The low-code sales pitch is that computers and automation make humans smarter by providing a computational lever that multiplies our intelligence. Perhaps. But you might also notice that, as people grow to trust in machines, we sometimes stop thinking for ourselves. If the algorithm says it’s the right thing to do, we'll just go along with it. There are endless examples of the disaster that can ensue from such thoughtlessness. ... When humans write code, we naturally do the least amount of work required, which is surprisingly efficient. We're not cutting corners; we're just not implementing unnecessary features. Low code solutions don’t have that advantage. They are designed to be one-size-fits-all, which in computer code means libraries filled with endless if-then-else statements testing for every contingency in the network. Low code is naturally less efficient because it’s always testing and retesting itself. This ability to adjust automatically is the magic that the sales team is selling, after all. But it’s also going to be that much less efficient than hand-tuned code written by someone who knows the business.


Applying Reliability Engineering to the Manufacturing IT Environment

To understand exposure to failure, the Reliability Engineers analyzed common failure modes across manufacturing operations, utilizing the Failure Mode and Effects Analysis (FMEA) methodology to anticipate potential issues and failures. Examples of common failure modes include “database purger/archiving failures leading to performance impact” and “inadequate margin to tolerate typical hardware outages.” The Reliability Engineers also identified systems that were most likely to cause factory impact due to risk from these shared failure modes. This data helped inform a Resiliency Maturity Model (RMM), which scores each common failure mode on a scale from 1 to 5 based on a system’s resilience to that failure mode. This structured approach enabled us to not just fix isolated examples of applications that were causing the most problems, but to instead broaden our impact and develop a reliability mindset. 


5 Skills All Marketing Analytics and Data Science Pros Need Today

Marketing analysts should hone their skills to know who to talk to – and how to talk to them – to secure the information they have. Trust Insights’ Katie Robbert says it requires listening and asking questions to understand what they know that you need to take back to your team, audience, and stakeholders. “You can teach anyone technical skills. People can follow the standard operating procedure,” she says. “The skill set that is so hard to teach is communication and listening.” ... By improving your communication skills, you’ll be well-positioned to follow Hou’s advice: “Weave a clear story in terms of how marketing data could and should guide the organization’s marketing team.” She says you should tell a narrative that connects the dots, explains the how and where of a return on investment, and details actions possible not yet realized due to limited lines of sight. ... Securing organization-wide support requires leaning into what the data can do for the business. “Businesspeople want to see the business outcomes. 


Neural Networks vs. Deep Learning

Neural networks, while powerful in synthesizing AI algorithms, typically require less resources. In contrast, as deep learning platforms take time to get trained on complex data sets to be able to analyze them and provide rapid results, they typically take far longer to develop, set up and get to the point where they yield accurate results. ... Neural networks are trained on data as a way of learning and improving their conclusions over time. As with all AI deployments, the more data it’s trained on the better. Neural networks must be fine-tuned for accuracy over and over as part of the learning process to transform them into powerful artificial intelligence tools. Fortunately for many businesses, plenty of neural networks have been trained for years – far before the current craze inspired by ChatGPT – and are now powerful business tools. ... Deep learning systems make use of complex machine learning techniques and can be considered a subset of machine learning. But in keeping with the multi-layered architecture of deep learning, these machine learning instances can be of various types and various strategies throughout a single deep learning application.


Ready or not, IoT is transforming your world

At its core, IoT refers to the interconnection of everyday objects, devices, and systems through the internet, enabling them to collect, exchange, and analyze data. This connectivity empowers us to monitor and control various aspects of our lives remotely, from smart homes and wearable devices to industrial machinery and city infrastructure. The essence of IoT lies in the seamless communication between objects, humans, and applications, making our environments smarter, more efficient, and ultimately, more convenient. ... Looking ahead, the future of IoT holds remarkable potential. Over the next five years, we can expect a multitude of advancements that will reshape industries and lifestyles. Smart cities will continue to evolve, leveraging IoT to enhance sustainability, security, and quality of life. The healthcare sector will witness even more personalized and remote patient monitoring, revolutionizing the way medical care is delivered. AI and automation will play a pivotal role, in driving efficiency and innovation across various domains.


What are network assurance tools and why are they important?

Without a network assurance tool at their disposal, many enterprises would be forced to limit their network reach and capacity. "They would be unable to take advantage of the latest technological advancements and innovations because they didn’t have the manpower or tools to manage them," says Christian Gilby, senior product director, AI-driven enterprise, at Juniper Networks. "At the same time, enterprises would be left behind by their competitors because they would still be utilizing manual, trial-and-error procedures to uncover and repair service issues." The popularity of network assurance technology is also being driven by a growing enterprise demand for network teams to do more with less. "Efficiency is needed in order to manage the ever-expanding network landscape," adds Gilby. New devices and equipment are constantly brought online and added to networks. Yet enterprises don’t have unlimited IT budgets, meaning that staffing levels often remain the same, even as workloads increase.


How tomorrow’s ‘smart cities’ will think for themselves

In the smart cities of the future, technology will be built to respond to human needs. Sustainability is the biggest problem facing cities – and by far the biggest contributor is the automobile. Smart cities will enable the move towards reducing traffic, and towards autonomous vehicles directed efficiently through the streets. Deliveries which are not successful the first time are one example. These are a key driver of congestion, as drivers have to return to the same address repeatedly. In a cognitive city, location data that shows when a customer is home can be shared anonymously with delivery companies – with their consent – so that more deliveries arrive on the first attempt. Smart parking will be another important way to reduce congestion and make the streets more efficient. Edge computing nodes will sense empty parking spaces and direct cars there in real-time. They will also be a key enabler for autonomous driving, delivering more data points to autonomous systems in cars. 


Navigating Your Path to a Career in Cyber Security: Practical Steps and Insights

Practical experience is critical in the field of cyber security. Seek opportunities to apply your knowledge and gain hands-on experience as often as you can. I recommend looking for internships, part-time jobs, or volunteer positions that allow you to work on real-world projects and develop practical skills. I cannot stress how important it is to understand the fundamentals. ... Networking is essential for finding job opportunities in any field, including cybersecurity. You should attend industry events and conferences (there are plenty of free ones) and try to meet as many professionals already working in the field as possible. Their insights will go a long way in your journey to finding the right role. There are also many online communities and forums you can join where cyber security experts gather to discuss trends, share knowledge, and explore job opportunities. Networking will help you gain insights, discover job openings, and even receive recommendations from industry professionals.


NCSC warns over possible AI prompt injection attacks

Complex as this may seem, some early developers of LLM-products have already seen attempted prompt injection attacks against their applications, albeit generally these have been either rather silly or basically harmless. Research is continuing into prompt injection attacks, said the NCSC, but there are now concerns that the problem may be something that is simply inherent to LLMs. This said, some researchers are working on potential mitigations, and there are some things that can be done to make prompt injection a tougher proposition. Probably one of the most important steps developers can take is to ensure they are architecting the system and its data flows so that they are happy with the worst-case scenario of what the LLM-powered app is allowed to do. “The emergence of LLMs is undoubtedly a very exciting time in technology. This new idea has landed – almost completely unexpectedly – and a lot of people and organisations (including the NCSC) want to explore and benefit from it,” wrote the NCSC team.



Quote for the day:

"When you practice leadership, the evidence of quality of your leadership, is known from the type of leaders that emerge out of your leadership" -- Sujit Lalwani

Daily Tech Digest - December 12, 2022

14 lessons CISOs learned in 2022

Ransomware attacks have increased in 2022, with companies and government entities among the most prominent targets. Nvidia, Toyota, SpiceJet, Optus, Medibank, the city of Palermo, Italy, and government agencies in Costa Rica, Argentina, and the Dominican Republic were among the victims in 2022, a year in which the lines between financially and politically motivated ransomware groups continued to be blurred. A critical piece of any organization's defense strategy should be employee awareness and training because "employees continue to be targeted in threat actor strategies through phishing and other social engineering means," says Gary Brickhouse, CISO at GuidePoint Security. ... Organizations should also do more to keep up with vulnerabilities in both open- and closed-source software. However, this is no easy task since thousands of bugs surface yearly. Vulnerability management tools can help identify and prioritize vulnerabilities found in operating systems applications.


Grow your own CIO: Building leadership and succession plans

To ensure the long-term health of the company, tech chiefs must focus on building up that middle tier of IT leaders, a reality many CIOs are only now recognizing the need to address. “There are not enough people out there — you have to develop your own people,’’ says Roberts, who estimates that only 10% to 20% of companies are “being intentional about doing formal development programs.’’ Mike Eichenwald, a senior client partner at Korn Ferry Consulting, agrees that it’s important to elevate individuals from vertical leadership roles within the pillars of infrastructure, engineering, product, and security to enterprise leadership roles. With technology converging in all aspects of the business, doing so will help organizations leverage the diversity of experience those midlevel managers have under their belts, and their learning curve and degree of risk will be minimized, Eichenwald says. “Unfortunately, organizations miss an opportunity to cultivate that talent internally and often find themselves needing to reach out to the [external] market to bring it in,’’ he adds.


Open source security fought back in 2022

Anyone paying attention to open source for the past 20 years—or even the past two—will not be surprised to see commercial interests start to flourish around these popular open source technologies. As has become standard, that commercial success is usually spelled c-l-o-u-d. Here's one prominent example: On December 8, 2022, Chainguard, the company whose founders cocreated Sigstore while at Google, released Chainguard Enforce Signing, which enables customers to use Sigstore-as-a-service to generate digital signatures for software artifacts inside their own organization using their individual identities and one-time-use keys. This new capability helps organizations ensure the integrity of container images, code commits, and other artifacts with private signatures that can be validated at any point an artifact needs to be verified. It also allows a dividing line where open source software artifacts are signed in the open in a public transparency log; however, enterprises can sign their own software with the same flow, but with private versions that aren’t in the public log. 


Turning the vision of a utopic smart city into reality

It’s critical to consider what success looks like, and this can be measured by how user-friendly and efficient a service is, as well as cost efficiencies. For instance, reducing the time to find a parking space in a new city from an hour to just a few minutes when using parking apps which can indicate spaces and process payment. It’s almost impossible to consider smart cities without thinking about the efficient energy management benefits of smart buildings. Sustainable initiatives such as integrated workplace management systems already have the capability to monitor over 50,000 data points per second, analyse data, and send it to mobile apps. This could see millions of users saving energy. With a long-term vision for smart city platforms to become unified or standardised, one solution can potentially work seamlessly anywhere in the world. Platforms could integrate city infrastructure and navigation, and access to emergency and city services. Transformation will be driven by users empowered with the right data, perhaps even according to their user type of student, tourist, or city resident.


Can real-time data visualisation deliver trust and opportunity?

What is interesting is that so much of this is driven through an ecosystem of partners. No one organisation can deliver the breadth and depth of data and tools needed to make such projects work and there is much to learn from that. Collaborations and partnerships can elevate and enhance real-time data visualisation and value. For many organisations however, real-time data is still virgin territory and real-time visualisation is one of those technologies where reality cannot hope to match expectation, at least according to Jaco Vermeulen, CTO of tech consultancy BML Digital. “Almost every customer says they want real-time visualisation, but then nine out of 10 can’t qualify why they need it, especially when it comes to what decisions or actions it will enable,” says Vermeulen. “This is usually because they start from the belief that the data is always available and therefore should be immediately understandable and yield profound insight. The truth is a bit more challenging.” ... “It is the real-time decisions that create impact,” he says. “Optimising supply chains, reducing waste and pollution, optimising operations, and informing and satisfying consumers. 


IBM’s Krishnan Talks Finding the Right Balance for AI Governance

The challenge comes essentially from not knowing how the sausage was made. One client, for instance, had built 700 models but had no idea how they were constructed or what stages the models were in, Krishnan said. “They had no automated way to even see what was going on.” The models had been built with each engineer’s tool of choice with no way to know further details. As result, the client could not make decisions fast enough, Krishnan said, or move the models into production. She said it is important to think about explainability and transparency for the entire life cycle rather than fall into the tendency to focus on models already in production. Krishnan suggested that organizations should ask whether the right data is being used even before something gets built. They should also ask if they have the right kind of model and if there is bias in the models. Further, she said automation needs to scale as more data and models come in. The second trend Krishan cited was the increased responsible use of AI to manage risk and reputation to instill and maintain confidence in the organization. 


13 tech predictions for 2023

“Different edges are implemented for different purposes. Edge servers and gateways may aggregate multiple servers and devices in a distributed location, such as a manufacturing plant. An end-user premises edge might look more like a traditional remote/branch office (ROBO) configuration, often consisting of a rack of blade servers. Telecommunications providers have their own architectures that break down into a provider far edge, a provider access edge, and a provider aggregation edge. ... As we enter 2023, CIOs have earned a seat among the decision-makers and are now at the helm of company-wide technology decision-making. Amid a volatile economic climate, IT leaders must prioritize reducing costs, but they are finding themselves pulled between contrasting concerns of managing spend, dealing with security risks, and fostering innovation. As they navigate an uncertain market, CIOs will need to analyze company usage, along with their previous experience, to rethink business approaches and make decisions. The goal is to identify ways to reduce spend across the company, but not at the expense of key areas like cybersecurity and innovation. 


Preventing a ransomware attack with intelligence: Strategies for CISOs

One of the most effective ways to stop a ransomware attack is to deny them access in the first place; without access, there is no attack. The adversary only needs one route of access, and yet the defender has to be aware and prevent all entry points into a network. Various types of intelligence can illuminate risk across the pre-attack chain—and help organizations monitor and defend their attack surfaces before they’re targeted by attackers. The best vulnerability intelligence should be robust and actionable. For instance, with vulnerability intelligence that includes exploit availability, attack type, impact, disclosure patterns, and other characteristics, vulnerability management teams predict the likelihood that a vulnerability could be used in a ransomware attack. With this information in hand, vulnerability management teams, who are often under-resourced, can prioritize patching and preemptively defend against vulnerabilities that could lead to a ransomware attack. Having a deep and active understanding of the illicit online communities where ransomware groups operate can also help inform methodology, and prevent compromise.


What to do when your devops team is downsized

If you lead teams or manage people, your first thought must be how they feel or how they are personally impacted by the layoffs. Some will be angry if they’ve seen friends and confidants let go; others may be fearful they’re next. Even when leadership does a reasonable job at communication (which is all too often not the case), chances are your teams and colleagues will have unanswered questions. Your first task after layoffs are announced is to open a dialogue, ask people how they feel, and dial up your active listening skills. Other steps to help teammates feel safe include building empathy for personal situations, energizing everyone around a mission, and thanking team members for the smallest wins. Use your listening skills to identify the people who have greater concerns and fears or who may be flight risks. You’ll want to talk to them individually and find ways to help them through their anxieties or recognize when they need professional help. You should also give people and teams time to reflect and adjust. Asking everyone to get back to their sprint commitments and IT tickets is insensitive and unrealistic, especially if the company laid off many people.


Our ChatGPT Interview Shows AI Future in Banking Is Scary-Good

ChatGPT is a large, advanced language processing model that is trained using a technique called generative pre-trained transformer, or GPT. This allows ChatGPT to generate human-like responses to questions and statements in a conversation, making it a powerful tool for a wide range of applications. Compared to traditional chatbots, which are often limited in their ability to understand and generate natural language, ChatGPT has the advantage of being able to provide more accurate and detailed responses. Additionally, because it is trained using a large amount of data, ChatGPT is able to learn and adapt to different conversational styles and contexts, making it more versatile and capable of handling a wider range of scenarios. ... The banking industry can use ChatGPT technology in a number of ways to improve their operations and provide better service to their customers. For example, ChatGPT can be used to automate customer service tasks, such as answering frequently asked questions or providing detailed information about products and services. This can free up customer service representatives to focus on more complex or high-value tasks, improving overall efficiency and customer satisfaction.



Quote for the day:

"Strong leaders encourage you to do things for your own benefit, not just theirs." -- Tim Tebow

Daily Tech Digest - October 14, 2022

Which cybersecurity metrics matter most to CISOs today?

Given the rapid increase in malware-free attacks, there’s a tendency on the part of cybersecurity teams to add more metrics. Seeing more reported data as a panacea for rising risks that aren’t immediately understood, cybersecurity teams will turn on as many metrics as possible, looking for clues. Relying on antivirus, SIEM (security information and event management), security ticketing systems, vulnerability scanners, and more, CISOs’ teams generate an overwhelming number of metrics that lack context. CISOs warn that presenting metrics straight from tools without a narrative supporting them is a mistake. C-level executives and the boards they report to are more focused on new insights that are contextually relevant than a series of tactical measures. Every new high-profile intrusion or breach drives up to a dozen or more internal user requests for new metrics. Managing user requests by how much value they provide to contextual intelligence and delivering business value is critical. CISOs tell VentureBeat it’s easy to say no to additional metrics requests when there is no connection to requested metrics that quantify the value cybersecurity delivers.


Making everything connect for smart cities

It’s a vision of how smart cities can be holistically planned by connecting the different city domains and addressing Sustainable Development Goals (SDGs) globally. In this way, mobility, energy, the environment, health, education, security and the economy are not treated separately, but rather as a whole consistent continuity of human-centric services. Smart cities need to be much better at creating an open platform of dialogue that is accessible to all citizens. ... These allow residents to engage with a wide array of data, as well as completing personal tasks like paying bills, finding efficient transportation and assessing energy consumption in the home. Smart cities also need to account for social infrastructure that provides a cultural fabric, making the city attractive to residents and offering a sense of local identity. It is often the social and cultural aspects of a city that citizens find makes it most attractive to live in – aspects such as green open spaces, a wide choice of retail outlets, and bustling nightlife. This is particularly important for cities that are being created ‘from scratch’ (rather than already existing) and need to find effective ways to attract residents.


Dell gets more edge-specific with Project Frontier platform

Dell also said it is expanding its current edge portfolio in the following ways: Edge analytics and operations - Manufacturers can optimize how they deploy edge applications with an Dell Validated Design for Manufacturing Edge, the company said. This now includes new Dell-validated partner applications to support advanced edge use cases, and improve factory processes and efficiencies, while reducing waste and raw materials usage for more sustainable operations. Manufacturers can respond quickly to changes in demand, and enable reconfigurable production lines with Dell's private 5G capability, Dell said. Edge computing and analytics - The PowerEdge XR4000 is the smallest server in the Dell lineup at about the size of a shoebox. The XR4000 is 60% shorter than conventional data center servers, and its multiple mounting options allow it to be installed in a rack, on walls or ceilings, saving valuable floor space. The multi-node, 2U chassis server can survive unpredictable conditions, such as heat waves or falls, the company said.


The White House can build on its AI Bill of Rights blueprint today

Several current uses of AI clearly violate the blueprint and should no longer be used. The president should also stop encouraging agencies to spend American Rescue Plan funds on ShotSpotter and other “gunshot detection” technologies, which change police behavior but have not been shown to decrease gun violence. These tools are in violation of the blueprint’s principles that AI tools must be safe, effective, nondiscriminatory, and transparent. ... On the legislative front, the AI Bill of Rights principles are embodied in both the American Data Privacy Protection Act and the Algorithmic Accountability Act of 2022, both of which the administration could put its support behind. There has been substantial investment in the development and adoption of AI, but nowhere near as much money or energy put toward safeguards or protection. We should not repeat the same self-regulatory mistakes made with social media and online advertising that left us in the privacy crisis we are in today. 


How intelligent automation changes CI/CD

Intelligent automation addresses many of the core requirements for successful software delivery. Basic process automation can increase devops productivity by automating routine manual tasks through code. For example, a developer can run a build in Jenkins that then triggers an automated task that pushes the build to Artifactory and kicks off a delivery pipeline. However, combining automation with AI-powered intelligence can turbocharge processes and improve business outcomes. Intelligent automation can automate routine tasks and then constantly improve automated decision making as the release moves through the delivery lifecycle. Intelligence applied to the release process — when combined with deep tools integrations that provide access not only to events but also to all process data — can automate the detection of software risks and automatically flag release candidates for remediation before they make it to production. In addition to increased devops productivity and faster and more accurate software releases, intelligent automation provides the means to implement centralized, automated control over compliance and security. 


A Big Threat for SMBs: Why Cybersecurity is Everyone’s Responsibility

It impacts everyone across every department and every element of operations. Cybersecurity is a collective responsibility. During this Cybersecurity Awareness Month, let’s debunk the pervasive misconception that cybersecurity is strictly an IT issue. To avoid becoming a statistic, SMBs need to develop a security culture that reinforces the idea that cybersecurity is the responsibility of every team member. From the founder who sets a security-focused tone to the specific teams that implement the policies, to the HR department responsible for onboarding new employees, to the IT team setting system password requirements, and to every employee that can potentially open a phishing email triggering a security incident, it’s a collective effort to stay aware. All individuals need to be trained, vigilant, and engaged. The devil is in the details, as it’s the tools, tasks, and routine activities each team member performs that will protect the company.


Seeing electron movement at fastest speed ever could help unlock next-level quantum computing

Seeing electrons move in increments of one quintillionth of a second could help push processing speeds up to a billion times faster than what is currently possible. In addition, the research offers a “game-changing” tool for the study of many-body physics. “Your current computer’s processor operates in gigahertz, that’s one billionth of a second per operation,” said Mackillo Kira, U-M professor of electrical engineering and computer science, who led the theoretical aspects of the study published in Nature. “In quantum computing, that’s extremely slow because electrons within a computer chip collide trillions of times a second and each collision terminates the quantum computing cycle. ... To see electron movement within two-dimensional quantum materials, researchers typically use short bursts of focused extreme ultraviolet (XUV) light. Those bursts can reveal the activity of electrons attached to an atom’s nucleus. But the large amounts of energy carried in those bursts prevent clear observation of the electrons that travel through semiconductors—as in current computers and in materials under exploration for quantum computers.


New data protection bill must enable a progressive data governance framework

The robust framework that vowed to safeguard the privacy of an individual’s data would have made the privacy design of the bill even more redundant. Consent and notice framework in the new Bill should be dealt with in such a way that it addresses the right to informational privacy while avoiding consent fatigue for consumers. For instance, individuals may receive innumerable privacy notifications causing consent fatigue; this issue was considered and acknowledged by the Justice Srikrishna committee report. Besides, from a business perspective, the cost of compliance, especially for small businesses, will be huge and may result in additional costs. The new personal data governance framework should focus on simplifying the consent and notice framework in such a manner that individuals can easily understand how and for what purpose is their personal data being processed. Besides, the new Bill must lay out better means and ways to obtain consent, which is inclusive, less tiresome, and efficient.


Emotional intelligence: How to create psychological safety for your IT team

The best leaders understand the complexities and imperfections of being human and are not afraid to present their true selves in the workplace. These leaders emanate compassion and encourage their team members to embrace and express their unique gifts and talents. Compassion cuts through mental constructs and perceptions. It begins when leaders examine and undo traditional rules, roles, and narratives that limit their thinking, decision-making, and worldview. Freedom from outdated narratives enables release, self-acceptance, and permission to bring one’s whole self to the workplace. Leaders who are driven by the needs of the ego struggle to let go of outdated competence, values, and skills. Marshall Goldsmith, one of the world’s foremost thought leaders on executive coaching, explains this perfectly in the title of his book, What Got You Here Won’t Get You There. The compulsive need to be right becomes more important than discovering new horizons, untapped potential, and possibilities. Self-righteousness creates a division between the self and the team, eroding trust.


Smart buildings may be your cybersecurity downfall

With the rise of IoT, a wave of adoption of IT and IoT solutions at all levels of building system architecture poses a serious cyber security issue. As it becomes increasingly difficult to distinguish between building automation systems and other systems used in companies and their infrastructures, more “cyber holes” tend to be left unmonitored. The use of insecure industrial protocols is another vulnerability that attackers take advantage of to disrupt smart buildings operations. This is especially the case for building automation systems. Popular protocols like BACnet and LonWorks are not implicitly secure and, like those used in the industrial production sector, tend to have their own vulnerabilities. ... As the cyber-physical equipment within buildings becomes increasingly distributed, especially due to the new trend of supervising building complexes from a central location, cyberattacks on smart buildings, as well as attacks on cities and other smart city infrastructures, can have a significant security impact for users.



Quote for the day:

"Personal leadership is the process of keeping your vision and values before you and aligning your life to be congruent with them." -- Stephen R. Covey

Daily Tech Digest - September 18, 2022

5 ways to secure devops

Devops workflows are designed for speed and rapidly iterating with the latest requirements and performance improvements. Gate reviews are static. The tools devops teams rely on for security testing can lead to roadblocks, given their gate-driven design. Devops is a continuous process in high-performance IT teams, while stage gates slow the pace of development. Devops leaders often don’t have the time to train their developers to integrate security from the initial phases of a project. The challenge is how few developers are trained on secure coding techniques. Forrester’s latest report on improving code security from devops teams looked at the top 50 undergraduate computer science programs in the US, as ranked by US News and World Report for 2022, and found that none require secure coding or a secure application design class. CIOs and their teams are stretched thin with the many digital transformation initiatives, support for virtual teams and ongoing infrastructure support projects they have going on concurrently. CIOs and CISOs also face the challenges of keeping their organizations in regulatory compliance with more complex audit and reporting requirements. 


Designing APIs for humans: Error messages

The status code of the response should already tell you if an error happened or not, the message needs to elaborate so you can actually fix the problem. It might be tempting to have deliberately obtuse messages as a way of obscuring any details of your inner systems from the end user; however, remember who your audience is. APIs are for developers and they will want to know exactly what went wrong. It’s up to these developers to display an error message, if any, to the end user. Getting an “An error occurred” message can be acceptable if you’re the end user yourself since you’re not the one expected to debug the problem (although it’s still frustrating). As a developer there’s nothing more frustrating than something breaking and the API not having the common decency to tell you what broke. ... Letting you know what the error was is the bare minimum, but what a developer really wants to know is how to fix it. A “helpful” API wants to work with the developer by removing any barriers or obstacles to solving the problem. The message “Customer not found” gives us some clues as to what went wrong, but as API designers we know that we could be giving so much more information here.


Arm Neoverse roadmap targets enterprise infrastructure, cloud

"Compute workloads are on a relentless march higher, and becoming more complex," said Chris Bergey, senior vice president and general manager of Arm's infrastructure line of business, at a press briefing. "Machine learning and AI are taking over the future, and so infrastructure will look nothing like the past." Over the next year, Arm will work closely with its cloud and software partners to optimize cloud-native software infrastructure, frameworks and workloads. These partnerships include contributions to projects including Kubernetes and Istio, along with several CI/CD tools used for creating cloud-native software for the Arm architecture. Arm will also work to improve machine learning frameworks such as TensorFlow and a number of workloads such as big data, analytics and media processing. The company is moving into more traditional enterprise spaces now, Bergey said, noting the work it has done with VMware on its Project Monterey and providing support for Red Hat's OpenShift and SAP's HANA. "These cloud providers all use GPUs to underpin their cloud workloads, and the majority of them are using Arm," Bergey said.


How quantum physicists are looking for life on exoplanets

So, some of the biggest things in the universe are certainly quantum mechanical, including supermassive blackholes which can lose energy through a quantum phenomenon known as Hawking radiation. The second point is one often thinks quantum deals with very low temperatures. Again, to take our sun as an example—it's very hot, but that's quantum mechanical. Low temperature doesn't serve as a requirement for quantum. This example of a star and the quantumness of the fusion process and the high temperatures associated with that—I just want to broaden the view of what quantum mechanics is and how ubiquitous it is. ... It's quite amazing that we can determine what is in these planets' atmospheres—planets that would be impossible for humans to ever visit. That, and we can look for signatures of life, like, are there molecules that we associate with life floating around in these planets, at least if it's Earth-like life; then we might be able to determine with some probability that some planet way out there that no human could ever visit, harbors life. Or maybe we could discover other candidate forms of life.


How Is Platform Engineering Different from DevOps and SRE?

Over time, thought leaders came up with different metrics for organizations to gauge the success of their DevOps setup. The DevOps bible, “Accelerate,” established lead time, deployment frequency, change failure rate and mean time to recovery (MTTR) as standard metrics. Reports like the State of DevOps from Puppet and Humanitec’s DevOps benchmarking study used these metrics to compare top-performing organizations to low-performing organizations and deduce which practices contribute most to their degree of success. DevOps unlocked new levels of productivity and efficiency for some software engineering teams. But for many organizations, DevOps adoption fell short of their lofty expectations. Manuel Pais and Matthew Skelton documented these anti-patterns in their book “DevOps Topologies.” In one scenario, an organization tries to implement true DevOps and removes dedicated operations roles. Developers are now responsible for infrastructure, managing environments, monitoring, etc., in addition to their previous workload. Often senior developers bear the brunt of this shift, either by doing the work themselves or by assisting their junior colleagues.


The Cyber Security Head Game

Just as the predators of the fish below are never going to go away (which is why this fish camoflages itself and sports huge fake eyes to scare predators), cyber predators also will never go away. And the best of these cyber predators will continue to penetrate even the strongest defenses, because the exponential increase in IT system complexity, which makes it increasingly difficult to even understand the full extent of what you're defending, favors cyber attackers over cyber defenders. So we need to assume that some hackers will inevitably get inside our networks and thus we must adopt strategies of deception, similar to those employed successfully by our fish here, to lessen the harm from competent hackers, who manage to get up close and personal. We also need to create doubt in hackers’ minds, about the benefits of attacking us in the first place, in the same way that the poisonous Cane toad avoids attacks from predators who know the toad’s skin has lethal poison glands, and milk snakes, who have no poison, but discourage would-be predators by mimicking the coloration of coral snakes, who definitely do have deadly venom.


US Cyber-Defense Agency Urges Companies to Automate Threat Testing

Automated threat testing is still not very widespread, according to the official, who added that organizations sometimes don’t really follow through after deploying expensive tools on their network and instead just assume they’re doing the job. Automating security controls will make it easier to stop attackers from relying on established tactics. The top threat actors are still going back and leveraging vulnerabilities that are up to 10 years and older, warned the CISA official. CISA is making the recommendation in collaboration with the Center for Threat-Informed Defense, a 29-member nonprofit formed in 2019 that draws on MITRE’s framework. Iman Ghanizada, global head of autonomic security operations at Google Cloud, a research sponsor of the Center, said automated testing is important for creating continuous feedback loops that can steadily improve protection. “Whether you are a large company or a startup, you have to have visibility, analytics, response and continuous feedback,” he said.


Smart Cities: Mobility ecosystems for a more sustainable future

Although every city is different, leading cities are becoming smarter through their participation in large, complex, digitally enabled ecosystems. The question for many urban leaders, however, is how to engage with them effectively. Our experience in working with large transportation and communications clients yields a multilayered model and approach to guide the design and management of urban mobility systems. Given the interconnected nature of the building blocks of mobility, each layer—demand, supply, and foundational—is critical. Cities must understand and manage all the interactions and interdependencies. For example, demand for different forms of transportation is enabled via available modes of transit and supporting infrastructure. None of these would be possible without regulations, financing, insurance, and innovation. ... To achieve its vision of becoming a 45-minute city, Singapore is focusing on building its infrastructure (e.g., it is building intermodal mobility hubs to allow commuters to move seamlessly from one mode of transportation to another). The city is developing a robust innovation ecosystem, collaborating with many private-sector players. 


How to Draw and Retain Top Talent in Cyber Security

Before you introduce policies to increase diversity, you need to know who is currently applying. Gather data on applicants to establish if you need to take proactive steps to attract specific groups – you can’t make rational business decisions without data. Analyze job descriptions to eliminate bias so you aren’t deterring anyone. Review the language -- are you unconsciously drafting job advertisements and application forms with a white male in mind? Consider a post-application survey so you can establish what is appealing to recruits and what might cause them to drop out. You’ll be surprised how many people want to share their feedback because a negative job application process can deter an applicant for good, and you could be missing out on the best talent through ignorance. We implemented an Applicant Tracking System to understand the sources our candidates are coming from, see how diverse the candidate pool is (or not), and improve the candidate experience by being able to track how their process progresses and ends. ... Once you’ve got these cyber professionals on board, you need to keep them. 


Why shift left is burdening your dev teams

Security and compliance challenges are a significant barrier to most organizations’ innovation strategies, according to CloudBees. The survey also reveals agreement among C-suite executives that a shift left security strategy is a burden on dev teams. 76% of C-suite executives say that compliance challenges and security challenges (75%) limit their company’s ability to innovate. This is due, in part, to the significant time spent on compliance audits, risks, and defects. At the same time, C-suite executives overwhelmingly favor a shift left approach, a strategy of moving software testing and evaluation to earlier in the development lifecycle, placing the burden of compliance on development teams. In fact, 83% of C-suite executives say the approach is important for them as an organization, and 77% say they are currently implementing a shift left security and compliance approach. This is despite 58% of C-suite executives reporting that shift left is a burden on their developers. “These survey findings underscore the urgent need to transform the software security and compliance landscape. 



Quote for the day:

"Courage is the ability to execute tasks and assignments without fear or intimidation." -- Jaachynma N.E. Agu