Daily Tech Digest - August 08, 2021

The Role of Artificial Consciousness in AI Systems

What this means is that AI programs having common sense may not be enough to deal with un-encountered situations, because it’s difficult to know the limits of common-sense knowledge. It may be that artificial consciousness is the only way to ascribe meaning to the machine. Of course, artificial consciousness will be different from the human variant. Philosophers like Descartes and Daniel Dennett, the physicist Roger Penrose, and many others have offered different theories of how the brain produces thinking from neural activity. Neuroscience tools like fMRI scanners might lead to a better understanding of how this happens and enable a move to the next level of humanizing AI. But that would involve confronting what the Australian philosopher David Chalmers calls the hard problem of consciousness – how can subjectivity emerge from matter? Put another way, how can subjective experiences emerge from neuron activity in the brain? Furthermore, human consciousness can only be understood through our own inner experience – the first-person perspective. 


Creating a Quality Strategy

Some teams might prefer to do ad-hoc exploratory testing with minimal documentation. Other teams might have elaborate test case management systems that document all the tests for the product. And there are many other options in between. Whatever you choose should be right for your team and right for your product. ... On some teams, the developers write the unit tests, and the testers write the API and UI tests. On other teams, the developers write the unit and API tests, and the testers create the UI tests. Even better is to have both the developers and the testers share the responsibility for creating and maintaining the API and UI tests. In this way, the developers can contribute their code management expertise, while the testers contribute their expertise in knowing what should be tested. ... Some larger companies may have dedicated security and performance engineers who take care of this testing. Small startups might have only one development team that needs to be in charge of everything.


It's time to improve Linux's security

Believe it or not, many vendors, especially in the Internet of Things (IoT), choose not to fix anything. Sure, they could do it. Several years ago, Linus Torvalds, Linux's creator, pointed out that "in theory, open-source [IoT devices] can be patched. In practice, vendors get in the way." As Cook remarked, with malware here, botnets there, and state attackers everywhere, vendors certainly should protect their devices, but, all too often, they don't: "Unfortunately, this is the very common stance of vendors who see their devices as just a physical product instead of a hybrid product/service that must be regularly updated." Linux distributors, however, aren't as neglectful. They tend to "cherry-pick only the 'important' fixes. But what constitutes 'important' or even relevant? Just determining whether to implement a fix takes developer time." It hasn't helped any that Linus Torvalds has sometimes made light of security issues. For example, in 2017, Torvalds dismissed some security developers as "f-cking morons." He didn't mean to put all security developers in the same basket, but his colorful language set the tone for too many Linux developers.


Creating a Secure REST API in Node.js

As an open-source technology, Node.js is sponsored by Joyent, a cloud computing and Node.js development provider. The firm has financed several other technologies, such as the Ruby on Rails framework, and provided hosting for Twitter and LinkedIn. LinkedIn also became one of the first companies to use Node.js for the backend of its mobile application. The technology was subsequently adopted by many technology leaders, including Uber, eBay, and Netflix. However, it wasn't until later that wide adoption of server-side JavaScript with Node.js began. Investment in the technology crested in 2017, and it is still trending near the top. Most popular code editors and IDEs have support and plugins for JavaScript and Node.js, so it comes down to how you customize your IDE to your coding requirements; many Node.js developers favor tools such as VS Code, Brackets, and WebStorm. Using middleware on top of plain Node.js is a common practice that makes developers' lives easier.
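As a minimal sketch of that middleware layering (assuming Express with the helmet package; the endpoint and bearer-token check are illustrative, not from the article):

```typescript
// Minimal sketch: Express middleware hardening a REST endpoint.
// Assumes `npm install express helmet`; route and token check are illustrative.
import express from "express";
import helmet from "helmet";

const app = express();
app.use(helmet());        // sets security-related HTTP response headers
app.use(express.json());  // parses JSON request bodies

// A trivial auth middleware: reject requests without a bearer token.
app.use((req, res, next) => {
  if (!req.headers.authorization?.startsWith("Bearer ")) {
    return res.status(401).json({ error: "missing token" });
  }
  next();
});

app.get("/api/users", (_req, res) => res.json([{ id: 1, name: "Ada" }]));

app.listen(3000, () => console.log("API listening on :3000"));
```

Each `app.use` call runs before the route handler, so cross-cutting concerns like headers, parsing, and auth stay out of the endpoint logic.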


In a world first, South Africa grants patent to an artificial intelligence system

At first glance, a recently granted South African patent relating to a “food container based on fractal geometry” seems fairly mundane. The innovation in question involves interlocking food containers that are easy for robots to grasp and stack. On closer inspection, the patent is anything but mundane. That’s because the inventor is not a human being – it is an artificial intelligence (AI) system called DABUS. ... The granting of the DABUS patent in South Africa has received widespread backlash from intellectual property experts. The critics argued that it was the incorrect decision in law, as AI lacks the necessary legal standing to qualify as an inventor. Many have argued that the grant was simply an oversight on the part of the commission, which has been known in the past to be less than reliable. Many also saw this as an indictment of South Africa’s patent procedures, which currently only consist of a formal examination step. This requires a check box sort of evaluation: ensuring that all the relevant forms have been submitted and are duly completed.


Ford's new BlueCruise hands-off driving feature is a solid first effort

It keeps the vehicle in the center of the lane, but with a little too much urgency. It's not a safety issue, but to a driver unfamiliar with what's going on, the steering movements are a little too frequent and a little too jerky. I can tell that the computer is working really hard to keep the car centered at all times — I compared it to a 16-year-old driver who was still learning the ropes and wasn't quite confident in their abilities, making frequent, jerky input adjustments as they drove along rather than the smoother, more practiced inputs that an experienced driver would make. It isn't necessary to always be centered exactly in the lane, after all — an experienced driver knows that drifting a few inches to the left or right is normal. I said to the Ford engineers that most people probably wouldn't notice the tiny steering inputs, but they might lose confidence in the system because of them, even if they couldn't quite put their finger on why. Future releases will improve on it, I'm sure. BlueCruise also isn't (yet) aware of anything going on to the side or behind the vehicle.


Critical Cobalt Strike bug leaves botnet servers vulnerable to takedown

Cobalt Strike is a legitimate security tool used by penetration testers to emulate malicious activity in a network. Over the past few years, malicious hackers—working on behalf of a nation-state or in search of profit—have increasingly embraced the software. For both defender and attacker, Cobalt Strike provides a soup-to-nuts collection of software packages that allow infected computers and attacker servers to interact in highly customizable ways. The main components of the security tool are the Cobalt Strike client—also known as a Beacon—and the Cobalt Strike team server, which sends commands to infected computers and receives the data they exfiltrate. An attacker starts by spinning up a machine running Team Server that has been configured to use specific “malleability” customizations, such as how often the client is to report to the server or specific data to periodically send. Then the attacker installs the client on a targeted machine after exploiting a vulnerability, tricking the user or gaining access by other means.


Test Debt Fundamentals: What, Why & Warning Signs

Test Debt is hard to measure factually, but we can rely on our human capacity to detect, feel, and react to warning signs. For test automation, we can sense organizational behaviors and specific test automation attributes. Let’s get back to the why of our automated tests. One objective of our test automation effort is to accelerate the delivery of software changes with confidence. The test automation value disappears when the team starts to bypass the test automation campaign, search for alternative routes, or ask for exceptions. Various reasons are possible, such as long execution times, instability, lack of understanding, or other maintainability issues. The execution time is directly tied to essential indicators of software delivery: lead time for changes, cycle time, and MTTA. These metrics are all part of the Accelerate report, which correlates an organization’s performance with these measures. We need to constrain our test execution time to limit its impact on these acceleration metrics. For test automation, that means fewer but more valuable tests, executed faster. 


Systems of systems: The next big step for edge AI

SoS will allow autonomous or semi-autonomous systems to control and respond to data flows. In the defense sector, for example, it will connect the data dots gathered from weather analysis, radars, and video surveillance to provide either the quickest path for a missile or the best way to intercept it. Separately, a train technology provider that delivers transportation as a service needs to unify the subsystems in a train and in a train station, expediting failure flagging and repairs to reduce costly service delays. In each case, a system of systems will inform or replace human decision-making, leading to faster, smarter, and more precise insights. ... It’s no stretch to say that edge AI-powered systems of systems will change society as we know it. Like bees working together to build and maintain a hive, algorithms in a SoS will form a swarm. Cars that can communicate with each other will be collectively smarter and safer than any individual car. Inside one vehicle, a SoS will coordinate navigation and telematics while independently gathering live weather and traffic data from roads.


Mainframes: The Missing Link To AI (Artificial Intelligence)?

The power of AI for mainframes does not have to mean building new AI projects from scratch. For example, there are emerging AIOps tools that help automate the systems. Some of the benefits include improved performance and availability, increased support speed for application releases and the DevOps process, and the proactive identification of issues. Such benefits can be essential, since it is increasingly difficult to attract qualified IT professionals. According to a recent survey from Forrester and BMC, about 81% of the respondents indicated that they rely partially on manual processes when dealing with slowdowns, and 75% said they use manual labor for diagnosing multisystem incidents. In other words, there is much room for improvement—and AI can be a major driver for this. “Mainframe decision makers are becoming more aware than ever that the traditional way of handling mainframe operations will soon fall by the wayside,” said John McKenny, Senior Vice President and General Manager of Intelligent Z Optimization and Transformation at BMC. 



Quote for the day:

"Ninety percent of leadership is the ability to communicate something people want." -- Dianne Feinstein

Daily Tech Digest - August 07, 2021

Facilitation Skills Just Might Be The Best Kept Leadership Secret

Not surprisingly, the International Association of Facilitators (IAF) insists that facilitative leadership is a particularly successful leadership approach. Host of IAF's Facilitation Impact Awards, Jeffer London explains that facilitative leaders use an inclusive style to tap individual potential. “They co-create and collaborate in order to get things done,” explains London. “The projects of facilitative leaders are done in an iterative manner that allows all individuals to contribute, evaluate and improve aspects of shared initiatives.” Furthermore, IAF insists that today’s workplace complexity requires leaders to lean into a more facilitative style. “In a world that can be seen as increasingly fragmented and chaotic, leaders who can build participation, alignment and meaning are finding more success,” insists Vinay Kumar, Chair of the International Association of Facilitators (IAF). “Leaders who understand how to invite people into a participatory environment, and use the group energy to innovate fit well into today's context. With the world calling for more inclusion and equity, tomorrow's leaders need to be facilitators.”


SEC charges crypto exchange execs for the first time over unregistered token sales

The SEC’s Friday order found that two executives from the Blockchain Credit Partners company used the Ethereum blockchain to sell cryptocurrencies to investors while misleading them about the company’s profitability. Specifically, investors purchased cryptocurrencies using digital assets like ether. The company then promised to pay investors over 6 percent in interest and that the funds would go toward physical investments like car loans to create additional income. The SEC determined that these “real-world” investments wouldn’t generate the income advertised. “Full and honest disclosure remains the cornerstone of our securities laws – no matter what technologies are used to offer and sell those securities,” Gurbir S. Grewal, SEC Enforcement Division director, said in a statement Friday. “This allows investors to make informed decisions and prevents issuers from misleading the public about business operations.” Friday’s charges against the company come as the federal government is preparing to issue new regulations for the decentralized finance and cryptocurrency markets.


Developers, DevOps, and cybersecurity: The top tech talent employers are looking for now

The report reads: "At the skills level, summary analysis across all job postings for all tech job roles suggests employers tend to seek well-rounded candidates. This also reflects the ever-expanding nature of innovation, whereby new platforms, new coding languages, new hardware and devices, new data streams and new combinations of technology building blocks (think IoT) are a de facto part of the job for any technology professional." This also explains why cybersecurity is often not specifically listed in skills reports, despite the fact employers increasingly expect baseline IT security knowledge from workers. Take the UK, for example: according to the 2021 City and Guilds Skills Index published in June, jobs postings for "cybersecurity technician" in the country increased by a massive 19,222% between April 2020 and April 2021, whereas roles for "cybersecurity engineer" grew by 292%. This compares to 312% growth in ads for "full-stack developer" during the same period, and a 184% increase in job postings for "Azure architect".


Is Trust or Innovation More Important for a Brand in 2021?

So, how do we avoid slip-ups as we move into this more active period of positioning ourselves in this new normal? Problems happen when a brand positions (and therefore sells) itself as innovative or as socially conscious and doesn’t deliver. Consistency is what’s important in branding, and if you’re already innovative (this applies to most tech companies, hence those glass-half-full tech execs), stick with it. Just show your customers they can trust you to remain true to your brand values. A great way to make sure you are staying true to your brand is to come up with a list of brand values, plus a value proposition. This, combined with a description of who your target customer is, will keep you on the straight and narrow no matter what happens. A value proposition is just a couple of sentences about what your company offers, and why and how it’s uniquely qualified to offer that. You should be able to whip one up easily enough. In coming up with your values, value proposition and your target customer description, you should be able to work out where your brand should be positioned. 


The Future of Blockchain Will Be Interoperability

The biggest challenge for interoperability between blockchains is the programming language. The transaction schemas as well as the consensus models differ between chains, in some cases considerably. The use of open protocols is presented as a possible solution to blockchain interoperability problems, as it allows universal interaction. ... The first breakthrough in this area, for Cardano, is with the Nervos blockchain, an open-source, permissionless, PoW consensus protocol focused on creating a public, universal, interoperable network. IOHK (the developer of Cardano) and Nervos are working to build an interoperability bridge between the two networks, which will allow users to make cross-asset transactions. But Cardano is also innovating on interoperability at the programming-language level with IELE. In 2018, IOHK made agreements with Runtime Verification (the developer company) for its upgrade, as IELE actually started as a design language, and is working to reach a steady state, as with KEVM. Since late 2020, Cardano developers have had a bridge to the Solidity/Ethereum community through the K Ethereum Virtual Machine (KEVM).


Spotlight on CockroachDB

CockroachDB is implemented as a distributed key-value store over a monolithic sorted map, which makes it easy for large tables and indexes to function. While CockroachDB is a distributed SQL database, developers treat it as a relational database because it uses the same SQL syntax. At the architectural level, however, CockroachDB differs from a traditional relational database. In CockroachDB, every table is ordered lexicographically by key. So, when we store data in the database, we are leveraging the key-value store. Since CockroachDB has a distributed architecture, we just need to spin up a CockroachDB node and point it at a cluster, and the database participates in that cluster. CockroachDB then coordinates with the nodes to gain consensus for all queries and transactions. When we spin up a node and point it at the cluster, data is balanced out based on what you optimally want to do with that data.
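Because CockroachDB speaks the PostgreSQL wire protocol, an ordinary Postgres driver is enough to talk to a cluster. A minimal sketch using node-postgres (the host, table, and values are illustrative; it assumes a locally running insecure cluster):

```typescript
// Sketch: querying CockroachDB through the node-postgres driver.
// Connection details are illustrative; 26257 is CockroachDB's default SQL port.
import { Client } from "pg";

async function main() {
  const client = new Client({
    host: "localhost",
    port: 26257,
    user: "root",
    database: "defaultdb",
  });
  await client.connect();

  // Ordinary SQL: the distributed key-value layout underneath stays invisible.
  await client.query(
    "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"
  );
  await client.query("UPSERT INTO accounts (id, balance) VALUES (1, 100)");
  const res = await client.query("SELECT id, balance FROM accounts");
  console.log(res.rows);

  await client.end();
}

main().catch(console.error);
```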


How To Enhance IoT Security: Learning The Right Approach To A Connected Future

A risk-based approach is a mindset that allows you to improve the certainty of achieving outcomes by employing strategies or methods that consider threats and opportunities. This approach can be applied during operations, while designing the process or product, or at product improvement stages. A risk-based approach also allows you to capture opportunities, prevent losses, and improve operations throughout the organization. Therefore, it would not be wrong to say that a risk-based approach should be made a core element of quality management systems and performance excellence processes, including ISO 9001:2015. The approach can help you understand the risk matrix of your devices so that you can apply appropriate security controls in an IoT system. Updating firmware and software is an essential process if you plan to improve IoT security, as software updates offer plenty of benefits. For example, they can help repair security loopholes that arise from software bugs.


Facebook Introduces New Platform For Building Robots

Droidlet lets researchers use different computer vision or NLP algorithms with their robots. In addition, they can use Droidlet to accomplish complex tasks in the real world or within a simulated environment like Minecraft or Habitat. Droidlet is capable of building embodied agents that can recognise, react to, and navigate their surroundings. It simplifies the integration of various cutting-edge machine learning algorithms in these systems, allowing users to prototype new ideas faster than ever before. According to the research paper, “Droidlet: modular, heterogenous, multi-modal agents”, the objective of the platform is to build intelligent agents that can learn continuously from their encounters with the real world. The researchers hope that the platform may help to further understanding of various areas of research, including self-supervised learning, multi-modal learning, interactive learning, human-robot interaction, and lifelong learning.


Blockchain And IOT: The Next Frontier Of Device Connectivity

There are a few very promising angles that deserve significant exploration. The one that I’m most excited about is fusing blockchain and IoT. There are only two major players in this space at the moment: IOTA and IoTeX. An IoTeX blockchain-powered camera recently won the Consumer Electronics Show award for privacy and security. This is a significant step for smart devices and blockchain connectivity, and the highly competitive price point shows that the scalability and widespread-adoption problems blockchains once faced are no longer barriers. Even this camera, which represents a real step forward for both blockchain and smart devices, is only just scratching the surface. There are currently 770 million surveillance cameras in the world. As important as they are to many people, surveillance cameras aren’t the most abundant devices in our world. There are more than 5 billion cell phones, 1.4 billion refrigerators, and nearly 2 billion televisions in circulation. The 40 billion device mark suddenly seems fairly doable.


What is cyber risk quantification, and why is it important?

Most will have some idea of what cyber risk quantification entails, but it's always good to be on the same page. Mark Tattersall, vice president of product management at LogicGate, in his blog The Business Case for Risk Quantification, does an excellent job of defining cyber risk quantification. To begin, he looks at project prioritization. "For many years projects have been prioritized based on qualitative assessments of likelihood and numerically weighted scales, whereas risk quantification supports more rigorous decision-making by quantifying the potential financial loss to your business due to a risk scenario," wrote Tattersall. "Risk quantification is a tactical tool used to help understand and evaluate key risk scenarios in order to make more informed decisions and determine the financial impact on your organization." Put simply, the idea behind quantification is to prioritize risks according to their potential for financial loss, thus allowing responsible people in a company to create budgets based on mitigation strategies that afford the best protection and return on investment.
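As a hedged illustration of that prioritization, here is the classic annualized loss expectancy calculation (ALE = single loss expectancy × annual rate of occurrence). The scenarios and figures below are invented, not from the article:

```typescript
// Sketch: ranking risk scenarios by annualized loss expectancy (ALE).
// ALE = SLE (cost of one incident) * ARO (expected incidents per year).
// All names and numbers are hypothetical.
interface RiskScenario {
  name: string;
  singleLossExpectancy: number;   // dollars per incident
  annualRateOfOccurrence: number; // expected incidents per year
}

const scenarios: RiskScenario[] = [
  { name: "Ransomware outage", singleLossExpectancy: 500_000, annualRateOfOccurrence: 0.2 },
  { name: "Phishing credential theft", singleLossExpectancy: 25_000, annualRateOfOccurrence: 4 },
  { name: "Lost unencrypted laptop", singleLossExpectancy: 10_000, annualRateOfOccurrence: 1.5 },
];

const ranked = scenarios
  .map((s) => ({ ...s, ale: s.singleLossExpectancy * s.annualRateOfOccurrence }))
  .sort((a, b) => b.ale - a.ale);

// Highest expected annual loss first: this ordering drives the mitigation budget.
for (const s of ranked) console.log(`${s.name}: $${s.ale.toLocaleString()}/yr`);
```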



Quote for the day:

"The task of the leader is to get his people from where they are to where they have not been." - Henry A. Kissinger

Daily Tech Digest - August 06, 2021

The Role of Business Architecture in Defining Data Architecture

Data architects can systematically examine the information concepts in the information map and define corresponding data entities for each of those concepts. There is no assumption that the data model and the information map will be identical. Data architects will apply data modeling techniques to formalize data entities as appropriate. The information map’s role is rather to provide business ecosystem transparency, delivering a business-driven perspective to ensure that data models and related deployments enable and do not hinder the organization they are meant to benefit. As data entities are defined, data architects can leverage information concept relationships to establish corresponding relationships among data entities in the data models. All information maps have a set of relationships that data architects may interrogate to derive their entity relationships. The next step is to attribute the data entities. Figure 5 depicts data attribute derivation using child capabilities defined under Agreement Management.


HTTP/2 Implementation Errors Exposing Websites to Serious Risks

To show how such an attack would work, Kettle pointed to an exploit he executed against Netflix where front-end servers performed HTTP downgrading without verifying request lengths. The vulnerability allowed Kettle to develop an exploit that triggered Netflix's back-end to redirect requests from Netflix's front-end to his own server. That allowed Kettle to potentially execute malicious code to compromise Netflix accounts, steal user passwords, credit card information, and other data. Netflix patched the vulnerability and awarded Kettle its maximum bounty of $20,000 for reporting it to the company. In another instance, Kettle discovered that Amazon's Application Load Balancer had failed to implement an HTTP/2 specification regarding certain message-header information that HTTP/1.1 uses to derive request lengths. With this vulnerability, Kettle was able to show how an attacker could exploit it to redirect requests from front-end servers to an attacker-controlled server. 


How to prepare your Windows network for a ransomware attack

Too many of us are still reliant on older server platforms that make it harder to roll out security solutions through Active Directory. We may have Server 2016 and Server 2019 servers in our network, but we’re not taking advantage of the security features of that domain functional level. Too many of us are still on older forest and domain functional levels because we have older servers or applications, and a lack of testing keeps us from rolling out these newer features. Or we have vendors that won’t certify newer platforms and Active Directory features. Raising your forest level to 2016 provides many features that better protect the network, such as privileged access management and automatic rolling of NTLM secrets on a user account. If your functional level is still 2008 R2, you don’t have a UI for the Active Directory recycle bin, which would make recovery easier. Nor does the 2008 R2 functional level let you close an old security hole: unchanging passwords on your service accounts.


Can the public cloud become confidential?

The Confidential Cloud is a secure confidential computing environment formed over one or more public cloud providers. Applications, data, and workloads within a Confidential Cloud are protected by a combination of hardware-grade encryption, memory isolation, and other services in the underlying host. Like micro-segmentation and host virtualization, resources within a Confidential Cloud are isolated from all processes and users in a default zero-trust posture. But the Confidential Cloud does more than isolate network communications, it isolates the entire IT environment used by a workload—including compute, storage, and networking. That enables support for virtually any application. Because Confidential Cloud protection is inextricably part of data, the protection extends wherever the data goes. Legacy enterprise perimeters are defined by physical appliances, but a Confidential Cloud’s perimeter is established by an inextricable combination of hardware isolation, encryption, and explicit least-privileged access policy. 


Why the future of service is hybrid

For many businesses though, this has led to employment issues, especially as the workforce ages. Knowledge loss is an increasingly common problem. According to the Service Council, 70% of service organisations say they would be burdened by the knowledge loss of a retiring workforce in the next five to 10 years, while 50% claim they are currently facing a shortage of resources to adequately meet service demand. Automation is great, but it will only go so far to help. Interestingly, the TSIA recently found that half of all field services organisations don’t have a formal career path in place for their field service engineers. This, in my view, is a huge point of unnecessary commercial risk. These organisations are not doing enough to prepare younger service techs for a mixed reality future – one where they will have to work more closely with digital technology and machines than any previous generation. It won’t happen by accident. There is certainly a need for an integral ‘system of record’ that captures accurate data about equipment ‘as maintained’. 


How to Recognise and Reduce HumanDebt

HumanDebt™ is the equivalent of Technical Debt, but for people. All of the initiatives, the projects, the intentions we (the organisation) had to do better by our employees, but we abandoned halfway. All of the missed opportunities to make their lives and their work easier and more joyful. All of the empty talk on equality, respect, lack of blame, courage and trust. All of the missing focus on empowered teams and servant leadership. All of the lack of preoccupation or resources for building better team dynamics. All of the toxic culture created by these. That’s Human Debt. ... It is tempting to believe that this type of debt is the organisation’s problem only. Even more tempting is to believe that it only happens at that macro, cultural level and that that is the only level where it can be fixed. Both are fallacies though. It’s important that the organisation has a degree of recognition, which enables them to offer "organisational permission" and help, as there really is only one solid thing to start with - empower teams to work on their own dynamics and improve their happiness by giving them the resources they need to do so.


How to deal with a toxic teammate

Toxic behavior may have occurred less frequently or been less noticeable during the pandemic. “There has been more stress but also a lot of grace-giving and cutting-of-slack to account for whatever people have going on in their personal and professional lives,” Cuthbert says. “The water cooler is gone and hasn’t been replaced and there is less of a forum for those who are negative or unhappy.” But it can take numerous forms. “Motivating through fear and unattainable goals and timelines, obfuscating expectations and scope of job descriptions or projects, not clearly identifying the North Star and who is doing what, being inconsistent in holding people accountable, dominating, yelling, talking over others, and interrupting are all signs of toxic behavior,” Mattheis says. “Working remotely has not changed that reality. What it has done is adjust how it looks and feels as well as made it more difficult to speak to it and hold people accountable.” Like dealing with a toxic boss, responding to a peer’s unhealthy dynamics can be tricky, but there are constructive approaches for using emotional intelligence to address the issues and mitigate their impact on your own productivity and well-being.


Chip shortage has networking vendors scrambling

The semiconductor industry is predicting a possible recovery in 2023. But who knows what demand will be at that time, Sadana said. Part of the problem is that current semiconductor foundry capacity is not adequate to meet the recent surge in global demand, wrote Baron Fung, industry analyst at Dell'Oro Group, in a recent blog. “The cost of servers and other data center equipment is projected to rise sharply in the near term partly due to the global semiconductor shortages,” Fung stated. “An increase of server average selling prices could approach the double-digit level that was observed in 2018, which was another period of tight supply and high demand. However, in the longer term, we anticipate that supply and demand dynamics could reach equilibrium and that technology transitions could drive market growth.” ... “We continue to proactively manage the supply chain, and our strategic relationship with Broadcom is helping us in this regard. Importantly, we have secured vendor commitments that will allow us to accelerate product delivery and bring down backlog as of Q2 and beyond,” Thomas stated.


Why businesses should embrace cloud-native development

Containers provide the infrastructure to realise a microservices architecture in practice. They provide individual standalone components for an app that can be independently replaced, changed, or removed without jeopardising the rest of your infrastructure. This is essential to realise the cloud-native vision, because the completeness of a container package and its agnosticism to its environment ensures the portability needed for cloud-native apps – containerised apps can be deployed in whatever cloud environment you operate in, whether it be public, private, or hybrid. The use of containers in the cloud-native model thereby brings speed and scalability that cannot be achieved through traditional systems architecture, and addresses a fundamental business need: for changes in software to be applied quickly and seamlessly so that tasks can be completed efficiently and inexpensively. For all these reasons, containers are one of the biggest trends in enterprise software development.


CISA's Easterly Unveils Joint Cyber Defense Collaborative

"To some extent, some of these activities are already going on across the federal government, but they're running largely in stovepipes. So the idea is that we bring together our partners in the government and our private sector partners to really mature this planning capability," Easterly said. Besides CISA and its parent organization, the Department of Homeland Security, other federal government participants will include the U.S. National Security Agency, U.S. Cyber Command and the FBI. Easterly announced nine companies have signed up to participate,: CrowdStrike, Palo Alto Networks, FireEye, Amazon Web Services, Google, Microsoft, AT&T, Verizon and Lumen. The JCDC will build on the relationships CISA has with Information Sharing and Analysis Centers, or ISACs, which represent various industries. The concept for the new initiative came from the Cyberspace Solarium Commission, which published its report in 2020 (see: Senate Approves Chris Inglis as National Cyber Director).



Quote for the day:

"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer

Daily Tech Digest - August 05, 2021

Cybersecurity professionals: Positive reinforcement works wonders with users

Sai Venkataraman, CEO of SecurityAdvisor, in his Help Net Security article, The power of positive reinforcement in combating cybercriminals, said he wants management to rethink its approach and use positive reinforcement instead. "It's important to recognize that cognitive bias is part of the human brain's makeup and functionality," Venkataraman said in his introduction. "While these subconscious mental shortcuts make it difficult to change behaviors, it's not impossible." Cognitive bias is hands down the culprit. Charlotte Ruhl, in her Simply Psychology article What Is Cognitive Bias?, defined cognitive bias as: "A subconscious error in thinking that leads you to misinterpret information from the world around you and affects the rationality and accuracy of decisions and judgments. Biases are unconscious and automatic processes designed to make decision-making quicker and more efficient. Cognitive biases can be caused by a number of different things, such as heuristics (mental shortcuts), social pressures and emotions."


Hackers are using CAPTCHA techniques to scam email users

Researchers found that quantity continues to beat quality in email attacks. Proofpoint found that the highest number of clicks came from a threat actor linked to the Emotet botnet. “This total reflects their effectiveness and the sheer volume of emails they sent in each campaign,” the report notes. The group, whose infrastructure was knocked out by international law enforcement earlier this year, has gone virtually dormant since. Cybersecurity researchers also say that companies shouldn’t underestimate basic cyber hygiene in combatting ransomware. Hackers are increasingly turning to email to distribute initial malware that’s used later to download ransomware rather than using email as the initial attack vector. In 2020, Proofpoint detected 48 million emails that contained malware that was used to launch ransomware. Top threats detected by Proofpoint included names like The Trick, Dridex and Qbot. Concerns over ransomware have only skyrocketed in 2021 after a series of high-profile attacks against critical industries in the United States. 


To Protect Consumer Data, Don’t Do Everything on the Cloud

Restricting private data collection and processing to the edge is not without its downsides. Companies will not have all their consumer data available to go back and re-run new types of analyses when business objectives change. However, this is the exact situation we advocate against to protect consumer privacy. Information and privacy operate in a tradeoff — that is, a unit increase in privacy requires some loss of information. By prioritizing data utility with purposeful insights, edge computing reduces the quantity of information from a “data lake” to the sufficient data necessary to make the same business decision. This emphasis on finding the most useful data over keeping heaps of raw information increases consumer privacy. The design choices that support this approach — sufficiency, aggregation, and alteration — apply to structured data, such as names, emails or number of units sold, and unstructured data, such as images, videos, audio, and text. To illustrate, let us assume the retailer in our wine-tasting example receives consumer input via video, audio, and text.
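A minimal sketch of the aggregation design choice, using the wine-tasting example (the event shape and endpoint are hypothetical): only the summary leaves the edge device, never the raw per-consumer events.

```typescript
// Sketch: aggregate at the edge, send only the summary to the cloud.
// Event shape and ingest URL are hypothetical.
interface TastingEvent { wineId: string; rating: number } // raw data, stays local

function summarize(events: TastingEvent[]) {
  const byWine = new Map<string, { count: number; total: number }>();
  for (const e of events) {
    const s = byWine.get(e.wineId) ?? { count: 0, total: 0 };
    byWine.set(e.wineId, { count: s.count + 1, total: s.total + e.rating });
  }
  // Only counts and averages leave the device; individual ratings do not.
  return [...byWine].map(([wineId, s]) => ({
    wineId,
    tastings: s.count,
    avgRating: s.total / s.count,
  }));
}

// Example upload of the aggregate only:
// await fetch("https://example.com/ingest", {
//   method: "POST",
//   body: JSON.stringify(summarize(localEvents)),
// });
```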


Do You Have the Empathy to Make the Move to Architect?

Solution and API architects may focus on different levels of the stack, but also perform very similar roles. Usually, an architect is a more senior, but non-executive role. An architect typically makes high-level design decisions, enforces technical standards and looks to guide teams with a mix of technical and people skills. “Being an architect takes social skills built on the foundation of the technical,” said Keith Casey an independent contractor, API consultant and author of “The API Design Book.” “No matter how good at the socials you are, you need to have the technical. Have you built a system like this? Have you shipped a system like this? You can read cookbooks all day, until you’ve put that in the oven, you haven’t cooked. You actually have to succeed and fail a few times before you can really offer advice to everyone. Social has to come after the technical foundation.” While a developer likes to dig deep into the weeds of a particular product or language, an architect is ready to broaden their understanding of enterprise architecture and how it fits into the business as a whole.


California's privacy law raises risks of legal action and fines over data collection

The upcoming California Privacy Rights Act (CPRA) is considered a pioneer in data privacy, and it strengthens the current California Consumer Privacy Act with stricter rules. Enforcement is also beefed up with the creation of the California Privacy Protection Agency (CPPA), plus the ability of individual Californians to file suits against companies for non-compliance. The law was passed in November 2020, and it applies to any company of sufficient size that does business in California, which includes online sales, without requiring a physical location. California residents can request from a company how their personal data has been used, and for what purpose, and they can request that their personal data not be sold or demand it be deleted, including any data that has been sold to third parties. Each company must also state if artificial intelligence was applied to any of their personal data and, if it was, what the logic was behind the AI. This is essentially asking companies to reveal how their algorithms rank the data.


How to Explain Complex Technology Issues to Business Leaders

Business leaders generally trust their tech counterparts to successfully address and resolve all the necessary technical details. What colleagues most want is assurance that whatever technology IT is proposing delivers benefits that outweigh capital and operating expenses. "We need to rise above the technology itself to explain the impact it will have," Kelker said. Jerry Kurtz, executive vice president of insights and data, at IT advisory firm Capgemini North America, also stressed the importance of focusing on the project's potential business outcome and value. "Rather than getting into the details of the technology, challenge, or solution in technical terms, showcase the outcomes the solution can bring and how they will impact the business as a whole," he explained. "Once this has been accomplished, it's time to develop a roadmap to reach the agreed upon target state." Using analogies rooted in shared experiences is a good way to find a common ground with business leaders, advised Mike Bechtel, chief futurist at business and IT advisory firm Deloitte Consulting. 


How universities can facilitate blended learning through smart campus infrastructure

Smart campus infrastructure doesn’t only provide a reliable solution to short term connectivity issues, but it also offers long term scalability that can continuously be tweaked, upgraded and expanded to fit the institution’s needs as they shift. The ideal scenario would be to have low levels of latency on a high capacity network, creating breathing room so that any significant uptake in usage levels wouldn’t cause any issues. Alternative network providers (AltNets) can overprovision to ensure that this scenario plays out ideally for the university. By providing much more bandwidth than is needed, bottlenecks can be removed and users can enjoy a seamless connectivity experience. As broadband demand inevitably grows over time, optic kit can be upgraded in line with what is required. ... With Wi-Fi 6 deployed across the entire campus, the technology can take universities to new heights. Reliable, high speed connections implemented across the university would enable the student experience to take on a new form through third party deployments. Suddenly, smart homes can be utilised effectively across the entire campus. 

Recover from ransomware: Why the cloud is the way to go

Recovery in the cloud can happen before you ever need it. It starts with automatically and periodically performing an incremental restore of your computing environment to an IaaS vendor. This means your entire environment—including backups of both structured and unstructured data—is already restored before it’s needed. Yes, you will lose some amount of data, depending on the window between the last restore and the ransomware attack, so you will need to decide up front how often you execute the pre-restore process to minimize the loss. You also need to agree on what amount of data loss is acceptable, which is officially referred to as your recovery point objective (RPO); for example, restoring every four hours means accepting up to four hours of lost data. Technically, this type of recovery doesn’t require the cloud, but using the cloud makes it financially feasible for most environments. Doing it with a physical data center requires the costly route of paying for the data center before you need it. With the cloud you pay only for the storage associated with your pre-restored images. Cloud-friendly backup and DR products and services can proactively restore your entire environment to the cloud of your choice—once a day, once an hour, or continuously. 


Hybrid work model: 5 advantages

Organizations with the biggest productivity increases during the pandemic have supported and encouraged “small moments of engagement” among their employees, according to McKinsey. These small moments are where coaching, idea sharing, mentoring, and collaborative work happen. This productivity boost stems from training managers to reimagine processes and rethink how employees can thrive at work. Autonomy is the key to employee satisfaction: If you provide full autonomy and decision-making on how, where, and when your team members work, employee satisfaction will skyrocket. Autonomy is important for on-site workers, too. Employees who return to the office after over a year of setting their own schedule will need to feel that they are trusted to get work done without a manager standing by. At our company, mutual appreciation and positive assumptions are guiding principles. When we don’t see each other every day, it’s easy to make assumptions about other employees – we keep these assumptions positive, trusting that everyone is doing their best and making responsible decisions.

A New Approach to Securing Authentication Systems' Core Secrets

With SAML, user management is shifted from the service provider (SP) to an identity provider (IdP), and authentication and directory are decoupled from the service. Instead of worrying about dozens of different apps and their authentication measures, admins configure the IdP to verify all employees' identities. The SP and IdP only communicate with each other with a key pair: The IdP signs with the private key, and the SP verifies with the public key. A Golden SAML attack occurs when the attackers steal a private key from the identity provider and become a "rogue IdP," Be'ery said. This allows them to generate arbitrary access SAML tokens offline, within the attackers' environment. Doing this would let attackers access a system as any user, in any role, while bypassing security policies and MFA. They could also slip past access monitoring, if access is only monitored by the identity provider, Be'ery said. The security community saw this technique in the SolarWinds attack, which also marked the first publicly known use of Golden SAML in the wild, he noted. 
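The key pair at the heart of this is ordinary asymmetric signing. A stripped-down sketch using Node's crypto module (this is not real SAML, which uses XML signatures; it only illustrates the sign-with-private/verify-with-public trust model that a stolen IdP key subverts):

```typescript
// Sketch: the private-sign / public-verify step underlying SAML tokens.
// Real SAML signs XML assertions; this only illustrates the trust model.
import { generateKeyPairSync, createSign, createVerify } from "crypto";

const { privateKey, publicKey } = generateKeyPairSync("rsa", {
  modulusLength: 2048,
});

// The "IdP" signs an assertion with its private key.
const assertion = JSON.stringify({ user: "alice", role: "admin" });
const signature = createSign("sha256").update(assertion).sign(privateKey, "base64");

// The "SP" verifies with the public key and trusts whatever checks out.
const ok = createVerify("sha256").update(assertion).verify(publicKey, signature, "base64");
console.log(ok); // true; whoever holds the private key can mint valid tokens

// A Golden SAML attacker who steals privateKey can sign arbitrary
// assertions offline, and the SP cannot tell them from legitimate ones.
```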



Quote for the day:

"Great things are not something accidental, but must certainly be willed." -- Vincent van Gogh

Daily Tech Digest - August 04, 2021

Thoughtfully training SRE apprentices: Establishing Padawan and Jedi matches

Learning via osmosis is very powerful. There is a lot of jargon and technical terms that are best learned just by hearing others use these terms in context. For example, if you ask someone who doesn’t work in technology to pronounce nginx, they will likely say this incorrectly. This is very common for new engineers too. It’s not a problem, it just means there is a lot to learn which experienced engineers may take for granted. What if you asked a group of people who don’t work in technology to spell nginx? I’m sure you’d get many different answers. How does this change in a remote world? Really, it’s the same. You’ll still be attending meetings and hearing new terms, you can still attend standup, and you can still continue to google the terms you don’t know to build your vocabulary. For example, imagine you are in a meeting on the topic of incident management and you are reviewing metrics as a team. As a new SRE apprentice you might wonder, what does MTTD mean? If you hear or see this term in a meeting you can quickly google it and learn on the job. 


Blockchain applications that will change the world in the next 5 years

Decentralized finance (DeFi) is another increasingly blossoming application of blockchain technology that is set to gain significant momentum in the next 5 years. In Q1 2021, the dollar value of assets under management by DeFi applications grew from roughly $20 billion to $50 billion. DeFi is a form of finance that removes central financial intermediaries, like banks, to offer traditional financial instruments that utilize smart contracts on blockchains. An example of DeFi in action is the plethora of new decentralized applications now offering easier access to digital loans — users can bypass strict requirements of banks and engage in peer-to-peer lending with other people around the world. The next five years are vital for DeFi and will see dramatic growth in its applications, regulatory compliance associated with the technology and its overall use. Celebrity investor Mark Cuban, who gained notoriety in the blockchain industry through his advocation of NFTs, has suggested that “banks should be scared” of DeFi’s rising popularity. 


The remote-working challenge: ‘There are huge issues’

From an employees’ perspective “WFH (Working From Home) has the potential to reduce commute time, provide more flexible working hours, increase job satisfaction, and improve work-life balance,” a recent study by the University of Chicago entitled Work from home & productivity: evidence from personnel & analytics data on IT professionals noted. That’s the theory, but it doesn’t always work out like that. The researchers tracked the activity of more than 10,000 employees at an Asian services company between April 2019 and August 2020 and found that they were working 30 per cent more hours than they were before the pandemic, and 18 per cent more unpaid overtime hours. But there was no corresponding increase in their workload, and their overall productivity per hour went down by 20 per cent. Employees with children, predictably perhaps, were most affected – they worked 20 minutes per day more than those without. More surprisingly, the employees had less focus time than before the pandemic, and a lot more meetings. “Time spent on co-ordination activities and meetings increased, but uninterrupted work hours shrank considerably.”


Quantum Computing —What’s it All About

Quantum computers are a new type of computer that do calculations in a fundamentally different way. They will do certain calculations dramatically faster than current computers can. This will allow some business questions we currently answer infrequently to be answered faster and more often. It will also allow us to ask some questions we previously considered impossible to answer. And, as with any new technology, as we become clear on the capabilities, it will let us ask (and answer) questions that we had previously not even considered; solving the unknown unknowns, if you will. The clearest laying-out of this concept I’ve seen is a simple diagram I first encountered in an excellent presentation given by IBM’s Andy Stanford-Clark to Quantum London, called “Quantum Computing: a guide for the perplexed”. A fitting title and a fascinating talk. The most visible scientific and engineering feats in this field at the moment are the designing and building of the quantum computing hardware.


Cluster API Offers a Way to Manage Multiple Kubernetes Deployments

The focus of Cluster API initially is on projects creating tooling and on managed Kubernetes platforms, but in the long run it will be increasingly useful for organizations that want to build out their own Kubernetes platform, Burns suggested. “It facilitates the infrastructure admin being able to provision a cluster for a user in an automated fashion, or even build the tooling to allow that user to self-service and say ‘hey, I want a cluster’ and press a button and the cluster pops out. By combining Cluster API with something like Logic Apps on Arc, they can come up to a portal, press a button, provision a Kubernetes cluster, get a no-code environment and start building their applications, all through a web browser.” ... “We’re maturing to a place where you don’t have to be an expert; where the person who just wants to put together a data source and a little bit of a function transformation and an output can actually achieve all of that in the environment where it needs to run, whether that’s an airstrip, or an oil rig or a ship or factory,” Burns said.


Behind the scenes: A day in the life of a cybersecurity expert

"My biggest challenge is how to determine what we need to work on next," Engel said. "There's only so much time in the world and you only have so much manpower. We have so many ideas that we want to execute and deliver to ensure security and privacy, and we don't like to rest on our laurels. We're not just going to say, 'oh, this is an eight out of 10, so we don't need to touch it anymore.' We want to be 10 out of 10 everywhere." As for the fun part, "analysis on security events is a lot of fun because it's my background," he said. "I love that kind of thing." While he can't go into details on this, because of security reasons, it's an around-the-clock job. The automated systems can contact Engel at any time if something highly critical occurs—"like having a burglar alarm at your house or something like that," he said. "Someone's attempting to break-in. And in this example, we are the police." "A common misconception about cybersecurity is that it's literally just two people sitting in a dark room waiting for a screen to turn red, and then they maybe flick a couple buttons," Engel said. "It couldn't be further from the truth."


What is DataSecOps and why it matters

Security needs to be built into DataOps, not bolted on as an afterthought. This means building cross-team, ongoing collaboration between security engineering, data engineering and other relevant stakeholders, and not just at the end of a big project. This also means that the security of data stores needs to be understood and transparent to security teams. Number three, in the ever-changing data world, and with limited resources, prioritization is key. You should plan and focus on the biggest risks first. In data, that often means knowing where your sensitive data is, which is not so trivial, and prioritizing it much higher in terms of projects and resources. Number four, data access needs to have a clear and simple policy. If things start getting too complicated or non-deterministic around data access permissions (by non-deterministic, I mean that sometimes you may request access and get it, and sometimes you may not), you’re either being a disabler for business data usage, or you’re exposing security risks.


Improving microservice architecture with GraphQL API gateways

API gateways are nothing new to microservices. I’ve seen many developers use them to provide a single interface (and protocol) for client apps to get data from multiple sources. They can solve the problems previously described by providing a single API protocol, a single auth mechanism, and ensuring that clients only need to speak to one team when developing new features. Using GraphQL API gateways, on the other hand, is a relatively new concept that has become popular lately. This is because GraphQL has a few properties that lend themselves beautifully to API gateways. GraphQL Mesh will not only act as our GraphQL API gateway but also as our data mapper. It supports different data sources, such as OpenAPI/Swagger REST APIs, gRPC APIs, databases, GraphQL (obviously), and more. It will take these data sources, transform them into GraphQL APIs, and then stitch them together. To demonstrate the power of a library like this, we will create a simple SpaceX Flight Journal API. Our app will record all the SpaceX launches we attended over the years. 
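To make the gateway idea concrete, here is a stripped-down sketch using plain graphql-js rather than GraphQL Mesh itself; the schema and stubbed resolvers are illustrative, with comments marking where a real gateway would call the SpaceX REST API and a journal database:

```typescript
// Sketch: one GraphQL schema fronting two backend sources.
// Uses plain graphql-js; GraphQL Mesh generates this kind of gateway for you.
import { graphql, buildSchema } from "graphql";

// The unified schema clients see (field names are illustrative).
const schema = buildSchema(`
  type Launch {
    id: ID!
    mission: String
    attended: Boolean
  }
  type Query {
    launches: [Launch]
  }
`);

// The resolver fans out to the underlying sources, stubbed here: a real
// gateway would hit the SpaceX REST API and the journal database.
const rootValue = {
  launches: async () => {
    const launches = [{ id: "1", mission: "Demo-2" }]; // REST source (stub)
    const attendedIds = new Set(["1"]);                // DB source (stub)
    return launches.map((l) => ({ ...l, attended: attendedIds.has(l.id) }));
  },
};

// Clients now issue one query against one endpoint, whatever sits behind it.
graphql({ schema, source: "{ launches { mission attended } }", rootValue })
  .then((result) => console.log(JSON.stringify(result.data)));
```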


IT modernization: 5 truths now

Digital transformation got a lot of CIO attention this past year at events like the MIT Sloan CIO Symposium – and it still isn’t a product or a solution that anyone can buy. Rather, it’s best described as a continuous process involving new technologies, ways of working, and adopting a culture of experimentation. Fostering that culture leads to faster and more experimentation and the ability to arrive at better outcomes through continuous improvement. But just because the technology component is often not front and center (and shouldn’t be) in digital transformation projects doesn’t mean that a technology toolbox, including a foundational platform, is unimportant. Anything but. If you look back at some of the words in that digital transformation definition, it’s easy to see why traditional rigid platforms often intended to support monolithic long-lived applications might not fit the bill. Digital transformation is responsible in no small part for the acceleration of both containerized environments and the consumption of cloud services.


Building future-proof tech products and long-lasting customer relationships

When designing a new product, it is essential to not only look at current needs, but to anticipate changes in business and in computing models. A timeless design must include potential for expansion and adaptation as the industry evolves. Products designed in a non-portable manner are limited to specific deployment scenarios only, for example on-prem vs. cloud. Of course, predicting movements in the industry is not easy. Many people bet on storage tape going away, yet it is still found in many data centres today. Some jumped on a new trend too soon and failed. And others did not recognise the value in what they had created. Take Xerox, for example, which created then ignored the first personal computer. So, how do you go about creating enduring technology that makes a meaningful difference in people’s lives and businesses? First, stay close to analysts whose job it is to analyse the market and identify major trends. And, second, engage in deep conversations with end-users to truly understand their objectives and challenges. Here, it is essential to discuss customers’ evolving needs and future projects, then work to create a product that solves for both the short and long term.



Quote for the day:

"I think leadership's always been about two main things: imagination and courage." -- Paul Keating

Daily Tech Digest - August 03, 2021

Is remote working better for the environment? Not necessarily

When workers’ homes become their offices, commutes may fall out of the carbon equation, but what’s happening inside those homes must be added in. How much energy is being used to run the air conditioner or heater? Is that energy coming from clean sources? In some parts of the country during lockdown, average home electricity consumption rose more than 20% on weekdays, according to the International Energy Agency. IEA’s analysis suggests workers who use public transport or drive less than four miles each way could actually increase their total emissions by working from home. Looking further ahead, the questions multiply. Many Shopify employees live near the office and walk, bike or take public transit. Will remote work mean they move from city apartments to sprawling suburban homes, which use, on average, three times more energy? Will they buy cars? Will they be electric or gas-powered SUVs? “You have company control over what takes place in the office,” Kauk noted. “When you have everyone working remotely from home, corporate discretion is now employee discretion.”
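
The IEA's break-even point is easier to see with a worked example. Every factor below is an illustrative assumption, not an IEA figure; only the four-mile commute comes from the article.

```typescript
// Illustrative arithmetic only: all emission factors are assumptions.
const kgCO2PerMile = 0.4;       // assumed average petrol car
const kgCO2PerKwh = 0.4;        // assumed grid carbon intensity
const commuteMilesOneWay = 4;   // the IEA's short-commute ballpark
const extraHomeKwhPerDay = 10;  // assumed extra home heating/cooling

const commuteSaved = 2 * commuteMilesOneWay * kgCO2PerMile; // 3.2 kg avoided
const homeAdded = extraHomeKwhPerDay * kgCO2PerKwh;         // 4.0 kg added
console.log({ commuteSaved, homeAdded }); // net daily emissions go UP
// A longer commute, a cleaner grid, or shared office HVAC would each
// tip the balance the other way.
```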


Modernizing your applications with containers and microservices

There are many reasons to learn and design with serverless microservices, but that doesn’t mean they are perfect for every situation – just like microservices in general. If your workloads are stable or predictable in size, you generally won’t see the financial benefits of running in a serverless environment over the long term, in contrast to unpredictable workloads, where serverless platforms scale in response. Additionally, one downside of serverless and functions-as-a-service is magnified by stateful microservices, which either incur a longer “cold start” when launching from scratch or require long-term in-memory state management. One final caveat for serverless offerings is the implicit caution against vendor lock-in when using cloud provider-specific serverless offerings, which can lead to deeply integrated architectural decisions that can be impacted severely should the offering change capabilities, requirements, or pricing.
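
A back-of-the-envelope cost model shows why stable workloads blunt the serverless advantage. The prices below are invented round numbers for illustration, not any vendor's rate card.

```typescript
// Hypothetical prices for illustration only.
const dollarsPerMillionInvocations = 5.0; // assumed serverless rate
const fixedInstancePerMonth = 50.0;       // assumed always-on VM rate

function monthlyCost(callsPerMonth: number) {
  const serverless =
    (callsPerMonth / 1_000_000) * dollarsPerMillionInvocations;
  return { serverless, fixed: fixedInstancePerMonth };
}

console.log(monthlyCost(1_000_000));  // spiky, low volume: $5 vs $50
console.log(monthlyCost(50_000_000)); // steady, high volume: $250 vs $50
// Under these assumptions the break-even sits at 10M calls/month; a
// stable workload above that line pays a permanent serverless premium.
```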


The cybersecurity jobs crisis is getting worse

"Cybersecurity is seen as a cost centre to the business -- something you have to do, but only to a minimal degree, like paying the light bill. We need to shift the conversation to aligning our security programs with the business," says Alexander. "Businesses have a tendency to invest in things they see value in. We need to ensure they see the value in our cybersecurity programs -- including people, training and technology," she added. People and training are a key issue here: technology changes fast and the methods cyber criminals use to break into networks are constantly evolving, so it's important for organisations not only to hire the right people, but also to invest in training them so they can continue in their jobs by reacting to the latest threats and dealing with new forms of technology. But that doesn't start with employers: in order to ensure there are enough people to fill cybesecurity jobs going forward, education and training pathways are needed. "At a societal level, we have to do more to educate school age children about cybersecurity and career opportunities," says Jon Oltsik, Senior Principal Analyst and ESG Fellow.


Turning Microservices Inside-Out

Outbound events are already the preferred integration method for most modern platforms. Most cloud services emit events. Many data sources (such as CockroachDB changefeeds and MongoDB change streams) and even file systems (for example, Ceph notifications) can emit state-change events. Custom-built microservices are not an exception here. Emitting state-change or domain events is the most natural way for modern microservices to fit uniformly among the event-driven systems they are connected to and to benefit from the same tooling and practices. Outbound events are bound to become a top-level microservice design construct for many reasons. Designing services with outbound events can help replicate data during an application modernization process. Outbound events are also the enabler for implementing elegant inter-service interactions through the Outbox Pattern, and for complex business transactions that span multiple services using a non-blocking Saga implementation.
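
For readers unfamiliar with the Outbox Pattern mentioned above, here is a minimal sketch of the idea. The `Db` and `Tx` interfaces are hypothetical stand-ins for a transactional database client, not a real library API.

```typescript
// Outbox Pattern sketch: the state change and the outbound event are
// written in ONE local transaction; a separate relay (e.g. a CDC tool
// tailing the outbox table) later publishes the event to the broker.
interface Tx {
  insert(table: string, row: object): Promise<void>;
}
interface Db {
  transaction(work: (tx: Tx) => Promise<void>): Promise<void>;
}

async function placeOrder(db: Db, order: { id: string; total: number }) {
  await db.transaction(async (tx) => {
    await tx.insert("orders", order);
    // Same transaction: the event persists if and only if the order
    // does, so the database and the event stream can never diverge.
    await tx.insert("outbox", {
      aggregateId: order.id,
      type: "OrderPlaced",
      payload: JSON.stringify(order),
    });
  });
  // Note there is no direct broker call here: the relay reads `outbox`
  // and emits, so a broker outage cannot silently lose the event.
}
```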


Kubernetes Expands From Containers To Infrastructure Management

Google engineers and others at vendors like Portworx understood that extensions were needed to enable Kubernetes to do jobs such as managing compute allocations, data security, and networking, so the CNI (container network interface) and CSI (container storage interface) were created, leading to “a new avatar for the second coming of Kubernetes,” he says. “Kubernetes was originally – and still is, obviously – being used to manage containers,” Thirumale says. “But with these extensions of CNI, CSI and security extensions, Kubernetes can actually be used to manage data and storage and manage networking and all of that. If I were to put a Kubernetes layer in the middleware layer, looking upwards, it’s managing where the containers land. But looking down, it’s actually now managing infrastructure. There’s a whole new way of managing infrastructure. The traditional way was you had to go to the storage admin and say, ‘Give me five more nodes and give it to me in these terabytes and with this capability and all of that,’ then they’d provision your EMC box or a Pure box or NetApp box or what have you.”
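
The "looking down" half is easiest to see in code: instead of filing a ticket with the storage admin, a team declares a claim and the CSI driver provisions the volume. This sketch assumes @kubernetes/client-node's classic 0.x API (positional namespace and body arguments); "fast-ssd" is a hypothetical CSI-backed StorageClass.

```typescript
import * as k8s from "@kubernetes/client-node";

async function main(): Promise<void> {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault();
  const core = kc.makeApiClient(k8s.CoreV1Api);

  // The CSI driver behind the StorageClass provisions the actual volume.
  await core.createNamespacedPersistentVolumeClaim("default", {
    metadata: { name: "analytics-data" },
    spec: {
      accessModes: ["ReadWriteOnce"],
      storageClassName: "fast-ssd",
      resources: { requests: { storage: "10Gi" } },
    },
  });
  console.log("PVC created; provisioning is now Kubernetes' problem");
}

main().catch(console.error);
```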


How tech pros perceive the evolving state of risk in the business environment

This year’s study reveals the immense opportunity ahead for tech pros and IT leadership to align and collaborate on priorities and policies, positioning not only individual organizations but the industry at large to succeed in a future built around risk preparedness. “Technology professionals today are under even greater pressure to ensure optimized, secure performance for remote workforces while facing limited time and resources for personnel training. When it comes to risk management and mitigation, prioritizing intentional investments in technology solutions that meet business needs is critical,” said Sudhakar Ramakrishna, President and CEO, SolarWinds. “More than ever before, tech pros must partner closely with business leaders to ensure they have the resources and headcount necessary to proactively address security risks. And more importantly, tech pros should constantly assess their risk management and mitigation protocols to avoid falling into complacency and being ‘blind’ to risk.”


Is DeepMind’s new reinforcement learning system a step toward general AI?

The combination of reinforcement learning and deep neural networks, known as deep reinforcement learning, has been at the heart of many advances in AI, including DeepMind’s famous AlphaGo and AlphaStar models. In both cases, the AI systems were able to outmatch human world champions at their respective games. But reinforcement learning systems are also notorious for their lack of flexibility. For example, a reinforcement learning model that can play StarCraft 2 at an expert level won’t be able to play a game with similar mechanics (e.g., Warcraft 3) at any level of competency. Even slight changes to the original game will considerably degrade the AI model’s performance. “These agents are often constrained to play only the games they were trained for – whilst the exact instantiation of the game may vary (e.g. the layout, initial conditions, opponents) the goals the agents must satisfy remain the same between training and testing. Deviation from this can lead to catastrophic failure of the agent,” DeepMind’s researchers write in a paper that provides the full details of their open-ended learning approach.


Zoom Agrees to Settle Security Lawsuit for $85 Million

The lawsuit stems from users' complaints about the company's data privacy and security practices, including instances in which customers had their video conferences interrupted by "Zoom bombing," in which attackers gained access to meeting passwords or bypassed security features and disrupted the proceedings with profanity and offensive images. During the COVID-19 global pandemic, many organizations have turned to Zoom and other tech firms for video conferencing and collaboration services, which led to an increase in hacking attempts. At one point, the U.S. Justice Department warned that prosecutors could bring federal charges against those who disrupted meetings through Zoom bombing. In April 2020, an analysis by Citizen Lab, a group based at the University of Toronto that studies surveillance and its impact on human rights, found that although Zoom advertised full end-to-end encryption, the company actually used AES-128 encryption, in a configuration Citizen Lab judged inadequate, within its cloud-based videoconferencing platform.


The surprising link between creativity and risk

Though the connection between creativity and risk-taking seems intuitive, social scientists have struggled to show a direct link between the two. That’s because measuring creativity itself has proven to be devilishly difficult. “Past studies which aimed to explore the relationship between creativity and risk-taking have equated creativity to measures such as associational fluency, divergent thinking, tolerance of ambiguity, creative lifestyle, or intellectual achievements,” psychologists Vaibhav Tyagi, Yaniv Hanoch, Stephen D. Hall, and Susan L. Denham of Plymouth University in the UK and Mark Runco of the University of Georgia wrote in 2017, in Frontiers in Psychology. But, they added, “each of these measures only provides a narrow insight into some aspects of creativity.” Adopting a different approach, the researchers looked at creativity as a multidimensional trait involving self-described personality and creative achievements, ideation (the process of forming new ideas), association formation, and problem-solving, among other qualities.


The Ethical Challenges Of AI In Defence

The chief concern with using AI in defence and weaponry is that it might not perform as desired, leading to catastrophic results: it might miss its target, or launch unapproved attacks that spark conflicts. Most countries test their weapon systems’ reliability before deploying them in the field. But AI weapon systems can be non-deterministic, non-linear, high-dimensional, probabilistic, and continuously learning, and for a system with such characteristics, traditional testing and validation techniques are insufficient. Furthermore, the race between the world’s superpowers to outpace each other has also made people uneasy, as countries might not play by the norms or consider ethics while designing weapons systems, with disastrous implications on the battlefield. As defence leans further into technology, it becomes imperative that we evaluate the weaknesses of AI-based defence technologies that bad actors might exploit. For example, adversaries might tamper with a system’s training data, or attempt to reconstruct sensitive training data by analysing the model’s responses to specifically tailored test inputs.
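
Training-data tampering is easy to illustrate with a toy model (no relation to any real defence system): flipping a single label shifts where a nearest-centroid classifier draws its decision boundary.

```typescript
// Toy data-poisoning demo: a 1-D nearest-centroid classifier.
type Sample = { x: number; label: 0 | 1 };

function decisionBoundary(data: Sample[]): number {
  const mean = (l: 0 | 1): number => {
    const xs = data.filter((s) => s.label === l).map((s) => s.x);
    return xs.reduce((a, b) => a + b, 0) / xs.length;
  };
  // A point takes the class of the nearest centroid, so the boundary
  // sits halfway between the two class means.
  return (mean(0) + mean(1)) / 2;
}

const clean: Sample[] = [
  { x: 1, label: 0 }, { x: 2, label: 0 }, { x: 3, label: 0 },
  { x: 7, label: 1 }, { x: 8, label: 1 }, { x: 9, label: 1 },
];
console.log(decisionBoundary(clean)); // 5

// The attacker flips one training label (x = 9 becomes class 0):
const poisoned: Sample[] = clean.map((s) =>
  s.x === 9 ? { x: 9, label: 0 } : s
);
console.log(decisionBoundary(poisoned)); // 5.625: inputs between 5 and
// 5.625 are now classified differently than before the attack.
```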



Quote for the day:

"True leaders bring out your personal best. They ignite your human potential" -- John Paul Warren