Daily Tech Digest - August 14, 2021

Embedded finance won’t make every firm into a fintech company

One fintech’s choices on these matters may be completely different from another’s if they address different segments; it all boils down to tradeoffs. For example, deciding which data sources to use and balancing onboarding risk against transactional risk look different if optimizing for freelancers rather than larger small businesses. In contrast, third-party platform providers must be generic enough to power a broad range of companies and to enable multiple use cases. While the companies partnering with these services can build and customize at the product feature level, they are heavily reliant on their platform partner for infrastructure and core financial services, and are thus limited to that partner’s configurations and capabilities. As such, embedded platform services work well to power straightforward commoditized tasks like credit card processing, but limit companies’ ability to differentiate on more complex offerings, like banking, which require end-to-end optimization. More generally, and from a customer’s perspective, embedded fintech partnerships are most effective when providing confined financial services within specific user flows to enhance the overall user experience.


Company size is a nonissue with automated cyberattack tools

As mentioned earlier, cybercriminals will change their tactics to gain the most benefit at the least risk to themselves. Dark-side developers are helping matters by creating tools that require minimal skill and effort to operate. "Ransomware as a Service (RaaS) has revolutionized the cybercrime industry by providing ready-made malware and even a commission-based structure for threat actors who successfully extort a company," explains Little. "Armed with an effective ransomware starter pack, attackers cast a much wider net and make nearly every company a target of opportunity." A common misconception about cyberattacks is that cybercriminals operate by targeting individual companies. Little suggests cyberattacks on specific organizations are becoming rare. With the ability to automatically scan large chunks of the internet for vulnerable computing devices, cybercriminals are not initially concerned with which company they hit. ... Little is very concerned about a new bad-guy tactic that is spreading quickly: automated extortion. The idea is that once the ransomware attack is successful, the victim is threatened and coerced automatically.


Paying with a palm print? We’re victims of our own psychology in making privacy decisions

Unfortunately we’re victims of our own psychology in this process. We will often say we value our privacy and want to protect our data, but then, with the promise of a quick reward, we will simply click on that link, accept those cookies, log in via Facebook, offer up that fingerprint and buy into that shiny new thing. Researchers have a name for this: the privacy paradox. In survey after survey, people will argue that they care deeply about privacy, data protection and digital security, but these attitudes are not borne out in their behaviour. Several explanations exist for this, with some researchers arguing that people employ a privacy calculus to assess the costs and benefits of disclosing particular information. The problem, as always, is that certain types of cognitive or social bias begin to creep into this calculus. We know, for example, that people will underestimate the risks associated with things they like and overestimate the risks associated with things they dislike.


Ransomware Payments Explode Amid ‘Quadruple Extortion’

“While it’s rare for one organization to be the victim of all four techniques, this year we have increasingly seen ransomware gangs engage in additional approaches when victims don’t pay up after encryption and data theft,” Unit 42 reported. “Among the dozens of cases that Unit 42 consultants reviewed in the first half of 2021, the average ransom demand was $5.3 million. That’s up 518 percent from the 2020 average of $847,000,” researchers observed. Other statistics include the highest ransom demand made of a single victim that Unit 42 spotted, which rose to $50 million in the first half of 2021, up from $30 million last year. So far this year, the largest payment confirmed by Unit 42 was the $11 million that JBS SA disclosed after a massive attack in June. Last year, the largest payment Unit 42 observed was $10 million. Barracuda has also tracked a spike in ransom demands: In the attacks that it’s observed, the average ransom demanded per incident was more than $10 million, with only 18 percent of incidents involving a demand of less than that amount.


How a Simple Crystal Could Help Pave the Way to Full-scale Quantum Computing

For more than two decades, global control in quantum computers remained an idea. Researchers could not devise a suitable technology that could be integrated with a quantum chip and generate microwave fields at suitably low powers. In our work we show that a component known as a dielectric resonator could finally allow this. The dielectric resonator is a small, transparent crystal that traps microwaves for a short period of time. This trapping of microwaves, a phenomenon known as resonance, allows them to interact with the spin qubits for longer and greatly reduces the microwave power needed to generate the control field. This was vital to operating the technology inside the refrigerator. In our experiment, we used the dielectric resonator to generate a control field over an area that could contain up to four million qubits. The quantum chip used in this demonstration was a device with two qubits. We were able to show that the microwaves produced by the crystal could flip the spin state of each one.


How To Transition from a Data Analyst into a Data Scientist

What do you want to be – a data analyst or a data scientist? Do you need such a transition? Why do you need this shift toward being a data scientist? The most important question that might haunt most analysts is ‘how do you want to see your career graph grow?’ This is where the big difference comes in. Choosing the path that makes you a data scientist means a more challenging career, with new possibilities to design learning models that will set your skills apart from the herd. Set aside time to study research papers by prominent data scientists. Most of these are readily available on the internet free of cost. Find your areas of interest in the field, and take notes. When you spend large sections of your time understanding data science, you must validate your learning with facts. You will find such facts when you read the works of prominent computer and data scientists like Geoffrey Hinton, Rachel Thomas, and Andrew Ng, among many established experts who contributed to data science with their studies in ML, neural networks, and tools for designing models.


Philips study finds hospitals struggling to manage thousands of IoT devices

Hospital cybersecurity has never been more crucial. An HHS report found that there have been at least 82 ransomware incidents worldwide this year, with 60% of them specifically targeting US hospital systems. Azi Cohen, CEO of CyberMDX, noted that hospitals now have to deal with patient safety, revenue loss and reputational damage when dealing with cyberattacks, which continue to increase in frequency. Almost half of hospital executives surveyed said they dealt with a forced or proactive shutdown of their devices in the last six months due to an outside attack. Mid-sized hospital systems struggled mightily with downtime from medical devices. Large hospitals faced an average shutdown time of 6.2 hours and a loss of $21,500 per hour. But the numbers were far worse for mid-sized hospitals, whose IT directors reported an average of 10 hours of downtime and losses of $45,700 per hour. "No matter the size, hospitals need to know about their security vulnerabilities," said Maarten Bodlaender, head of cybersecurity services at Philips.
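
To put those figures side by side, here is a quick back-of-the-envelope calculation using only the averages quoted above:

    # Average cost per shutdown, from the survey figures quoted above.
    large = 6.2 * 21_500   # large hospitals: ~$133,300 per incident
    mid = 10 * 45_700      # mid-sized hospitals: ~$457,000 per incident
    print(f"large: ${large:,.0f}  mid-sized: ${mid:,.0f}  ratio: {mid / large:.1f}x")

Per incident, the mid-sized systems lose roughly three and a half times as much as the large ones.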


Does it Matter? Smart home standard is delayed until 2022

Richardson said that one big reason for the delay is that the software development kit (SDK) needs more work. He also stressed that with most standards-setting efforts, the goal is to deliver a specification, not a functioning SDK that developers can implement to test and use to build products. This is true. There is a world of difference between functioning software and a written spec. A developer working on Matter who didn’t want to be named told me he wasn’t surprised by the delay, and thought it might actually help smaller companies, because it gives them more time to work with the specification and meet the product launches expected from Amazon, Google, and Apple with more fully developed products of their own. He also added that he thought the SDK performed well in a controlled environment, but still needed more work. I was less convinced by the CSA’s argument that adding more companies to the working group (back in May there were 180 members and now there are 209) had caused delays. By that logic, we may never see a standard. 


Methods for Saving and Integrating Legacy Data

The IT person tells management the legacy database has maybe another month before it completely crashes. This is bad news for management. The database holds a huge amount of valuable data that needs to be transferred somewhere for storage until a solution for transforming and transferring the legacy data to the new system can be found. Simply losing the data, which contains information that must be kept for legal reasons as well as valuable customer information, would damage profits and is unacceptable. Two options for saving the legacy data in an emergency are: 1) transforming the files into a generalized format (such as PDF, Excel, TXT) and storing the new, readable files in the new database, and 2) transferring the legacy data to a VM copy of the legacy database hosted in the cloud. Thomas Griffin, of the Forbes Technology Council, wrote: “The first step I would take is to move all data to the cloud so you’re not trapped by a specific technology. Then you can take your time researching the new technology. Find out what competitors are using, and read to see what tools are trending in your industry.”
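
The first option, dumping legacy tables into a generalized readable format, can often be scripted while the database is still limping along. A minimal Python sketch, assuming the legacy system is reachable through a standard DB-API driver; the sqlite3 module, connection string and table names here are stand-ins for whatever the real system uses:

    import csv
    import sqlite3  # stand-in for the legacy system's actual DB-API driver

    conn = sqlite3.connect("legacy.db")  # hypothetical connection details
    cur = conn.cursor()

    for table in ("customers", "orders"):  # hypothetical table names
        cur.execute(f"SELECT * FROM {table}")
        with open(f"{table}.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cur.description])  # headers
            writer.writerows(cur.fetchall())                      # all rows

    conn.close()

CSV keeps the data readable anywhere; the same loop could just as easily target Excel or plain TXT.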


Is Your Current Cybersecurity Strategy Right for a New Hybrid Workforce?

To support a secure and productive hybrid workforce, enterprises need a technology platform that scales and adapts to their changing business requirements. This requires adopting a modular approach to supporting hybrid workers that includes integrating zero trust network access (ZTNA) for access to private or on-premises applications, a multi-mode cloud access security broker (CASB) for all types of cloud services, and on-device web security to protect user privacy. Securing corporate data on managed and BYOD devices is critical for businesses with hybrid workforces. ZTNA surmounts the challenges associated with VPNs and provides greater protection. It uses the zero-trust principle of least privilege to give authorised users secure access to specific resources one at a time. This is accomplished through identity and access management (IAM) capabilities like single sign-on (SSO) and multi-factor authentication (MFA), as well as contextual access control.
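
To make the least-privilege idea concrete, here is a minimal sketch of the kind of per-request, per-resource decision a ZTNA broker makes; the user names, resources and checks are hypothetical illustrations, not any vendor's API:

    # Least privilege: explicit grants only, evaluated one resource at a time.
    ALLOWED = {("alice@example.com", "payroll-app"), ("bob@example.com", "wiki")}

    def grant_access(user, resource, mfa_passed, device_managed):
        if not mfa_passed:       # identity check (MFA) is a hard requirement
            return False
        if not device_managed:   # contextual check: block unmanaged BYOD here
            return False
        return (user, resource) in ALLOWED

    print(grant_access("alice@example.com", "payroll-app",
                       mfa_passed=True, device_managed=True))  # True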



Quote for the day:

"Leadership involves finding a parade and getting in front of it." - John Naisbitt

Daily Tech Digest - August 13, 2021

7 ways to harden your environment against compromise

Running legacy operating systems increases your vulnerability to attacks that exploit long-standing vulnerabilities. Where possible, look to decommission or upgrade legacy Windows operating systems. Legacy protocols can increase risk. Older file share technologies are a well-known attack vector for ransomware but are still in use in many environments. In this incident, there were many systems, including Domain Controllers, that hadn’t been patched recently. This greatly aided the attacker in their movement across the environment. As part of helping customers, we look at the most important systems and make sure we are running the most up-to-date protocols that we can to further enhance an environment. As the saying goes, “collection is not detection.” On many engagements, the attacker’s actions are clear and obvious in event logs. The common problem is no one is looking at them on a day-to-day basis or understanding what normal looks like. Unexplained changes to event logs, such as deletion or retention changes, should be considered suspicious and investigated.
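
On Windows, one concrete check for the deletion case is event ID 1102, which the Security log records whenever it is cleared. A minimal sketch that shells out to the built-in wevtutil tool (run it with administrative rights):

    import subprocess

    # Event ID 1102 = "The audit log was cleared" in the Security log.
    # Seeing one without a matching change ticket warrants investigation.
    result = subprocess.run(
        ["wevtutil", "qe", "Security",
         "/q:*[System[(EventID=1102)]]",  # XPath filter for log-cleared events
         "/rd:true", "/c:10", "/f:text"], # 10 most recent, as readable text
        capture_output=True, text=True)
    print(result.stdout or "No log-cleared events found.")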


Robocorp Makes Robotic Process Automation Programmable

Robocorp Lab creates a separate Conda environment for each of your robots, keeping each robot and its dependencies isolated from the other robots and dependencies on your system. That enables you to control the exact versions of the dependencies you need for each of your robots. It offers RCC, a set of tools that allows you to create, manage, and distribute Python-based self-contained automation packages, along with the robot.yaml configuration file for building and sharing automations. Control Room provides a dashboard to centrally control and monitor automations across teams, target systems or clients. It offers the ability to scale with security, governance, and control. There are two options for Control Room: a cloud version and a self-managed version for private cloud or on-premises deployment. The platform allows users to write extensions or customizations in Python, something proprietary systems typically restrict, according to Karjalainen, and to extend automations with third-party tools for AI, machine learning, optical character recognition or natural language understanding.


How Your Application Architecture Has Evolved

Distributed infrastructure on the cloud is great but there is one problem. It is very unpredictable and difficult to manage compared to a handful of servers in your own data center. Running an application in a robust manner on distributed cloud infrastructure is no joke. A lot of things can go wrong. An instance of your application or a node on your cluster can silently fail. How do you make sure that your application can continue to run despite these failures? The answer is microservices. A microservice is a very small application that is responsible for one specific use-case, just like in service-oriented architecture but is completely independent of other services. It can be developed using any language and framework and can be deployed in any environment whether it be on-prem or on the public cloud. Additionally, they can be easily run in parallel on a number of different servers in different regions to provide parallelization and high availability.
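
To make "one specific use-case" concrete, a microservice can be as small as a single HTTP endpoint. A minimal sketch using Flask; the service and its static rates are hypothetical:

    from flask import Flask, jsonify

    app = Flask(__name__)

    RATES = {"EUR": 0.85, "GBP": 0.72}  # hypothetical conversion rates

    # One service, one responsibility: currency conversion and nothing else.
    @app.route("/convert/<currency>/<float:amount>")
    def convert(currency, amount):
        rate = RATES.get(currency.upper())
        if rate is None:
            return jsonify(error="unknown currency"), 404
        return jsonify(currency=currency.upper(), converted=amount * rate)

    if __name__ == "__main__":
        app.run(port=5000)  # run replicas behind a load balancer for availability

Because the service owns nothing but conversion, identical copies can run on any server in any region, and the failure of one instance takes nothing else down with it.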


Satellites Can Be a Surprisingly Great Option for IoT

IoT technologies tend to have a few qualities in common. They're designed to be low-power, so that the batteries on IoT devices aren't sapped with every transmission. They also tend to be long-ranging, to cut down on the amount of other infrastructure required to deploy a large-scale IoT project. And they're usually fairly robust against interference, because if there are dozens, hundreds, or even thousands of devices transmitting, messages can't afford to be garbled by one another. As a trade-off, they typically don't support high data rates, which is a fair concession to make for many IoT networks' smart metering needs. ... Advancements in satellites are only accelerating the possibilities opened up by putting IoT technologies into orbit. Chief among those advancements is the CubeSat revolution, which is both shrinking and standardizing satellite construction. "We designed all the satellites when we were four people, and by the time we launched, we were about 10 people," says Longmier. "And that wasn't possible five years before we started."


Tech giants unite to drive ‘transformational’ open source eBPF projects

“It will be the responsibility of the eBPF Foundation to validate and certify the different runtime implementations to ensure portability of applications. Projects will remain independently governed, but the foundation will provide access to resources to foster all projects and organize maintenance and further development of the eBPF language specification and the surrounding supporting projects.” The new foundation serves as further evidence that open source is now the accepted model for cross-company collaboration, playing a major part in bringing the tech giants of the world together. Sarah Novotny, Microsoft’s open source lead for the Azure Office of the CTO, recently said that open source collaboration projects can enable big companies to bypass much of the lawyering to join forces in weeks rather than months. “A few years ago if you wanted to get several large tech companies together to align on a software initiative, establish open standards, or agree on a policy, it would often require several months of negotiation, meetings, debate, back and forth with lawyers … and did we mention the lawyers?” she said. “Open source has completely changed this.”


The Importance of Properly Scoping Cloud Environments

A CSP should be viewed as a partner in protecting payment data, rather than assuming that all responsibility has been completely outsourced. The use of a CSP for payment security related services does not relieve an organization of the ultimate responsibility for its own security obligations, or for ensuring that its payment data and payment environment are secure. Much of this misunderstanding comes from simply not making payment data security part of the conversation, along with how requirements, such as those in PCI DSS, will be met. ... Third-Party Service Provider Due Diligence: When selecting a CSP, organizations should vet CSP candidates through careful due diligence, establishing an explicit understanding of which entity will assume management and oversight of security before entering into a relationship. This will assist organizations in reviewing and selecting CSPs with the skills and experience appropriate for the engagement.


The Difference Between Data Scientists and ML Engineers

The majority of the work performed by Data Scientists is in the research environment. In this environment, Data Scientists perform tasks to better understand the data so they can build models that will best capture the data’s inherent patterns. Once they’ve built a model, the next step is to evaluate whether it meets the project's desired outcome. If it does not, they will iteratively repeat the process until the model meets the desired outcome before handing it over to the Machine Learning Engineers. Machine Learning Engineers are responsible for creating and maintaining the Machine Learning infrastructure that permits them to deploy the models built by Data Scientists to a production environment. Therefore, Machine Learning Engineers typically work in the development environment, where they are concerned with reproducing the machine learning pipeline built by Data Scientists in the research environment. They also work in the production environment, where the model is made accessible to other software systems and/or clients.
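
That iterate-until-it-meets-the-target loop is easy to picture in code. A minimal scikit-learn sketch; the dataset and the 0.95 accuracy threshold are arbitrary stand-ins for a project's real data and success criterion:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Iterate over candidate models until one meets the desired outcome,
    # then hand the winner over to the Machine Learning Engineers.
    for n_trees in (10, 50, 100):
        model = RandomForestClassifier(n_estimators=n_trees, random_state=0)
        model.fit(X_train, y_train)
        score = accuracy_score(y_test, model.predict(X_test))
        print(f"{n_trees} trees: accuracy {score:.3f}")
        if score >= 0.95:  # hypothetical success criterion
            break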


A remedial approach to destructive IoT hacks

Automating security is critical to scaling IoT technologies without the need to scale headcount to secure them. Manual inventory, patching and credential management take roughly 4 man-hours per year for just one device. If an organization has 10,000 devices, that nets out to 40,000 man-hours per year to keep those devices secure. This is an impossible number of working hours unless the business has a staff of 20 dedicated to the cause. To continuously secure the thousands, or even tens of thousands, of devices on an organization’s networks, automation is necessary. With the mass scale of IoT devices and the opportunities to strike in every office and facility, automated identification and inventory of each device is crucial, so that security teams can understand how it communicates with other devices, systems and applications, and which people have access to it. Once devices are identified, automation technology allows for policy compliance and enforcement by patching firmware and updating passwords, defending your IoT as thoroughly as your other endpoints.
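
The staffing claim follows directly from the numbers quoted, taking a full-time year as roughly 2,000 working hours:

    hours_per_device = 4    # manual upkeep per device, per year
    devices = 10_000
    fte_hours = 2_000       # approximate working hours in a full-time year

    total = hours_per_device * devices           # 40,000 man-hours per year
    print(total / fte_hours, "full-time staff")  # 20.0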


Malicious Docker Images Used to Mine Monero

These malicious containers are designed to be easily misidentified as official container images, even though the Docker Hub accounts responsible for them are not official accounts. "Once they are running, they may look like an innocent container. After running, the binary xmrig is executed, which hijacks resources for cryptocurrency mining," the researchers note. Morag says social engineering techniques could be used to trick someone into using these container images. "I guess you will never log in to the webpage mybunk[.]com, but if the attacker sent you a link to this namespace, it might happen," he says. "The fact is that these container images accumulated 10,000-plus pulls, each." While it is unclear who’s behind the scheme, the malicious Docker Hub account was taken down after Aqua Security notified Docker, according to the report. Morag explains that these containers are not directly controlled by a hacker; instead, a script at entrypoint/cmd is designed to execute an automated attack. In this case, the attacks were limited to hijacking computing resources to mine cryptocurrency.
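
Since the malicious behaviour hides in the image's entrypoint/cmd, one basic precaution is to look at what an image will actually execute before running it. A minimal sketch using the docker-py SDK (the image name is a placeholder; a local Docker daemon is assumed):

    import docker  # pip install docker

    client = docker.from_env()
    image = client.images.pull("library/alpine:latest")  # placeholder image

    # An "innocent-looking" image with an unexpected entrypoint or cmd,
    # such as one launching xmrig, deserves scrutiny before it is ever run.
    config = image.attrs["Config"]
    print("Entrypoint:", config.get("Entrypoint"))
    print("Cmd:       ", config.get("Cmd"))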


Leveraging the Agile Manifesto for More Sustainability

Often the first thing that comes to mind is the “sustainable pace,” as pointed out by the 8th principle of the Agile Manifesto: “Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.” So, sustainability in this sense will ensure people will not be burned out by an insane deadline. Instead, a sustainable pace ensures a delivery speed that can be kept up for an infinite time. This understanding of sustainability falls into the profit perspective of the triple bottom line. Another way sustainability is often understood in the agile community is by focusing on sustaining agility in companies. This means agility and/or agile development will govern the work even after, for example, external consultants and trainers are gone. The focus is then on how to build a sustainable agile culture or on sustainable agile transformations. Over all these years, the agile manifesto has served me well in providing guidance, even in areas it wasn’t originally defined for.



Quote for the day:

"Leaders dig into their business to learn painful realities rather than peaceful illusion." -- Orrin Woodward

Daily Tech Digest - August 12, 2021

Three key areas to consider when settling technical debt

Software is an iterative product, and much of it has been developed over decades by teams of workers with significant experience and institutional knowledge. These teams are also responsible for maintaining and managing older technologies and platforms. But as business priorities change over time, systems built on older code can be neglected. Software development teams’ attention turns elsewhere, either by choice or by force, which can create disenfranchisement among staff if not managed correctly. When access to and knowledge of older code resides with only a few people, there is a potential insider threat risk, which is of particular concern if the software is being used to run critical IT infrastructure. To that end, IT leaders must factor succession planning into any strategic discussions they’re having. All workers eventually leave or retire, and if knowledge isn’t shared, you risk older systems becoming impossible for newer employees to manage. The importance of getting the basics right, such as applying updates and patches or managing configurations, never goes away, even for older systems.


Consistency, Coupling, and Complexity at the Edge

The key to understanding whether you should base your API design principles on REST or GQL is to grasp a concept in computer science known as Separation of Concerns (SoC). Well-designed, non-trivial software is composed of many layers, where each layer is segmented into many modules. If the SoC for each layer and module is clearly articulated and rigorously followed, then the software will be easier to comprehend and less complex. Why is that? If you know where to look for the implementation of any particular feature, then you will understand how to navigate the codebase (most likely spread across multiple repositories) quickly and efficiently. Just as REST and GQL queries provide consistency in API design, a clear SoC means that you have a consistent approach to where the implementation for each feature belongs. Developers are less likely to introduce new bugs in software that they understand well. It is up to the software architect to set the standard for a consistent SoC. Here is a common catalog of the various layers and what should go in each layer.
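
As an illustration of a clearly articulated SoC, here is a hypothetical three-layer split in Python, where every feature has exactly one home:

    # Hypothetical three-layer split: each concern has exactly one home.

    def get_user_handler(user_id):
        """API layer: shapes the response, nothing else."""
        return {"user": fetch_user(user_id)}

    def fetch_user(user_id):
        """Business layer: rules and validation live here, not in the API layer."""
        if not user_id:
            raise ValueError("user_id is required")
        return read_user_row(user_id)

    def read_user_row(user_id):
        """Data layer: the only code allowed to touch storage."""
        return {"id": user_id, "name": "Ada"}  # stand-in for a real query

    print(get_user_handler("42"))

A developer hunting for a validation bug knows to look only in the business layer; that predictability is the consistency described above.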


Certified ethical hacker: CEH certification cost, training, and value

While in the very early days of computing hacker was a value-neutral term for a curious and exploratory computer user, today most people use the word to describe bad guys who try to break into systems where they don't belong for fun or (usually) profit. An ethical hacker is someone who uses those hacking skills—the ability to find bugs in code or weaknesses in cyber defenses—for good, rather than for evil, tipping the potential victims off and using the insights gained to implement improved security measures. In some ways, the term "ethical hacker" arises from a milieu where many "black hat" bad guy hackers do in fact switch sides and become good guys and defenders rather than attackers. But it's also just a sexy term for a discipline that goes by other, more boring names like "penetration testing" or "offensive security research." You might also hear the term "red team" used—in large-scale penetration testing exercises, the red team plays the role of the attackers, while the blue team makes up the defenders. Still, whatever you call it, it's a job that's in demand: more and more companies are recognizing the business case for having in-house hackers probing their defenses for weakness, or using bug bounties to encourage freelance ethical hackers to find problems they may have missed.


5 steps for modernizing enterprise networks

Historically, network and security technologies were deployed independently with the latter typically being an overlay to the network. This was never ideal but worked well enough to stop the majority of breaches. Network engineers would design the network, and security professionals would deploy security tools at each point of ingress. One of the challenges today is that there are hundreds if not thousands of points of entry ranging from SaaS applications to VPN tunnels to guest access on Wi-Fi networks. Even if a business had infinite dollars, it would be impossible to deploy all the necessary security tools to defend each point. Another point of complexity is that the number of security tools continues to grow. In the past, firewalls and IDS/IPS systems were sufficient to protect an enterprise. Modern security includes those but also zero trust network access (ZTNA), secure web gateways (SWG), cloud access security brokers (CASB), endpoint and network detection-and-response, and other tools. One growing way to secure an enterprise is by embedding security into the network as a cloud service.


Next generation physicians reflect on overcoming barriers to digital transformation

Healthcare information systems struggle to replicate the achievements of sectors like banking and retail not only because of the increased regulatory scrutiny, but also because incentives are more complicated. "It’s not an 'I’m trying to sell you something, you’re trying to buy something' one-to-one relationship where you’re free to choose," said Dr. Stephanie Lahr, CIO and CMIO at Monument Health (formerly Regional Health). "We have payers in the middle of that construct, and that totally changes the dynamic of how those patients can come together and makes it difficult for us to look at airlines and banking and things like that [for examples]," said Lahr. "There’s a middle person with their own agenda and goals. … That’s one of the things that makes this difficult, because it’s not a free market." "The answer to every question is always time, money and motivation," said Dr. Yaa Kumah-Crystal, assistant professor of biomedical informatics and pediatric endocrinology at Vanderbilt University Medical Center. 


Digital transformation metrics: 8 counterintuitive lessons learned

Cybersecurity has long been considered by many executives to be a cost to be managed or even a drag on overall performance. Today, however, “the realization that cybersecurity has to be part of every discussion is more pervasive now than ever,” says Bentham. “Regulations, now employed in many countries, are driving the accountability to companies, making them liable for damages to citizens, customers and the like.” Thus, technology leaders must incorporate cybersecurity investments into their digital plans and ROI calculations. “The digital transformation strategist forges an early partnership with the cybersecurity organization and integrates them at all levels of the business and technology,” Bentham explains. “This integration allows the cyber professionals, who write or interpret cyber policies, to do so through a business lens.” As more organizations evolve to a cloud-first model, their security metrics may need to evolve as well. “Because the cloud is more dynamic, new metrics like mean time to adapt (MTTA) or mean time to secure (MTTS) will apply,” says Vishal Jain.


Demystifying four aspects of launching an online business

Although social networks are a good tool to create valuable content, generate interaction with your customers, create a community around your brand and even expand your reach, it is essential that you have a website, integrated with your social networks, on which you can have total control of the messages and images of your business and your products or services. On your own website, you can personalize the customer experience with the colors and design of your brand, make photo or video galleries, as well as create a personalized email that matches your company name, create marketing campaigns by email and even set up your own online store. With the right service provider as a partner, you can link your website and online store with your social networks and even design the images and update the products that you show in them, directly from your website. Having your own website and online store to sell your products and services can help increase your customers' trust in your brand and make them commit to your business.


TestNG vs. JUnit Testing Framework: Which One Is Better?

JUnit was introduced in 1997 as an open-source Java-based framework for unit testing. It is part of the xUnit family of unit testing frameworks. It allows developers to write and run repeatable tests, and it is used extensively along with Selenium for writing web automation tests. Its latest, programmer-friendly version is JUnit 5, which creates a robust base for developer-based testing on the Java Virtual Machine. TestNG is also a Java-based unit testing framework, developed in 2007 along the same lines as JUnit but with new and improved functionality, including flexible test configuration, support for parameters, data-driven testing, annotations, and various integrations. TestNG performs unit, end-to-end, and integration testing. TestNG generates reports that help developers understand the passed, failed, and skipped status of all the test cases. With TestNG in Selenium, you can rerun failed tests separately using a testng-failed.xml file that runs only the failed test cases.


Five steps to strengthen your security posture

DevSecOps is a modern approach to software development which makes security an integral part of the software lifecycle right from the outset. Security teams are integrated into the development and operations teams, meaning that app security is not just an afterthought, but a fundamental part of the architecture. Here you will also empower the security teams to introduce new security capabilities that can enhance user experience. In the traditional approach, IT teams operate within silos that don’t necessarily communicate effectively with each other during a threat. Bottlenecks can occur as the buck is passed from security to development and back again, which has a detrimental effect on the ability to respond to threats in a timely fashion. When everyone’s on the same team, and security is built into the core of an app, your organisation can take a much more agile approach, and be better prepared for potential security breaches. To take full advantage of DevSecOps, your systems should make use of full-stack observability, the ability to monitor the entire IT stack from customer-facing applications down to core network and infrastructure.


Elevating cyber resilience and tackling government information security challenges

We can divide the challenge into two parts. The first challenge is developing a solution that will provide actionable insights or automated operations to reduce the “alert fatigue syndrome” which affects most of today’s security operations centers (SOCs). The second challenge is to recruit, train and retain cyber professionals, and for that we need to develop and utilize advanced methodologies and technologies. When discussing a national-level cyber security operations center, we need to remember that national-grade challenges require national-grade solutions. These solutions have to incorporate several elements: state-of-the-art technology; effective, field-proven methodology; constant innovation, since the cyber domain is constantly evolving; collaboration (and I already elaborated on the Israeli Cyber Companies Consortium); and finally capacity buildup, addressing the human factor through training, certification and awareness.



Quote for the day:

"It is time for a new generation of leadership to cope with new problems and new opportunities for there is a new world to be won." -- John E Kennedy

Daily Tech Digest - August 11, 2021

Solving 3 Pervasive Enterprise Continuous Testing Challenges

A primary goal of continuous testing is to determine if a release candidate is ready for production. As described above, you absolutely need to ensure that the changes in each release don’t break existing functionality. But you also need to test the new functionality to ensure that it works and meets expectations. Making the ultimate go/no-go release decision can be a bit of a guessing game when different teams are responsible for different components and layers of the application: the browser interface, the mobile experience, the various packaged apps at work behind the scenes (SAP, Salesforce, ServiceNow), and all the microservices, APIs and integration platforms that are probably gluing it all together. They’re likely developing new functionality at different cadences and testing their parts in different ways, using different testing practices and different tools. But the user doesn’t make those distinctions. They expect it all to just work, flawlessly. Moët Hennessy-Louis Vuitton (LVMH), the parent company behind luxury brands such as Christian Dior, TAG Heuer and Dom Perignon, recently decided to streamline its testing process to support ambitious plans for e-commerce growth.


Mind Over Matter: Revamping Security Awareness With Psychology

It's clear that traditional approaches to cybersecurity training have failed. From mistakenly disclosing account information to falling for phishing attacks, time and time again, an organization's sensitive data often leaks through legitimate channels with a worker's unknowing help — demonstrating that cybersecurity is increasingly a behavioral challenge. Instead of clinging onto measures that have repeatedly proven to be ineffective at safeguarding organizations, security leaders must redesign cybersecurity awareness with the human mind at the forefront. For that, we must turn to basic principles of psychology so we can better understand human behavior — and how we can positively influence it. While it's nearly impossible to unlearn these biases, we can improve our employees' understanding of cognitive biases to make it easier to identify and mitigate the impact of psychologically powered cyberattacks — and ultimately facilitate changes in individual cybersecurity behavior. 


Chaos Malware Walks Line Between Ransomware and Wiper

Chaos became more ransomware-ish with version 3.0, when it added encryption to the mix. This sample had the ability to encrypt files under 1 MB using AES/RSA encryption, and featured a decryptor-builder, according to the researcher. Then, in early August, the fourth iteration of Chaos appeared on the forum, with an expansion of the AES/RSA encryption feature: now, files up to 2 MB in size can be encrypted. Operators can also append their own proprietary extensions to encrypted files, as other ransomware does, according to the analysis. It also offers the ability to change the desktop wallpaper of their victims. Ransomware has been on the rise so far in 2021, with global attack volume increasing by 151 percent in the first six months of the year compared with the same period a year earlier, according to a recent report. Meanwhile, the FBI has warned that there are now 100 different strains circulating around the world. The most-deployed ransomware in the wild is Ryuk, the report found, which could account for why the Chaos authors attempted to ride its coattails.


Cybersecurity is hands-on learning, but everyone must be on the same page

Most times, we see that the cybersecurity “budget” is spread across so many other budgets throughout a company or organization. It isn’t owned by a cybersecurity group. This leads to separate strategies, goals, and implementations of cybersecurity, largely wasting that budget entirely. The larger problem of having no cybersecurity budget because “we’ve never had an incident” or “we aren’t a big enough target” is one that many will regret when it is too late. Everything, and I mean everything, is largely reliant on the internet these days. I challenge companies to start thinking about their most valuable assets, those assets that, were they to disappear or be corrupted, would leave them with no company. I can guarantee that most of those assets sit on a computer system somewhere, be it a water system, the grid, a chemical formula, a shopping system, cloud infrastructure, data feeds, medical records, personal records, etc. Look at the cybersecurity budget as you would regular home maintenance.


Agile or Waterfall, which method should project developers adopt?

The IT and software industry was among the first to adopt this approach, as often the end objectives (what the customer wants) keep changing and the flexibility afforded by the agile methodology is welcomed. With the successes achieved in various projects, praise for the agile method has been overflowing. With almost every industry evolving fast, amid gross uncertainties and the risk that a product under development arrives late to the market, the calls to adopt agile grow. It is impossible, on any given day, not to come across some article that attempts to show how agile can be adopted in yet another industry. The traditional approach adopted by most industries has been the waterfall method, where the objective of the project is known in advance and the project progresses through identified stage gates. ... There is a plethora of reasons: new products, new processes, changes in business, and so on. The decision on whether to proceed with the waterfall or agile method is most often seen in product development projects, where a company plans to enter a market with a product but may need to change track midway if market needs and expectations change.


Improving Testability: Removing Anti-patterns through Joint Conversations

There are many code patterns and anti-patterns that we know are good (and bad) for developers. Usually we look at them in terms of maintainability. But they have an impact on testability as well. Let’s start with an easy one. Let’s say we have a service that’s calling a database. Now, if the database properties are hard-wired into the code, every developer will tell you that’s a bad thing, because you can’t replace the database with an equivalent. In a testing scenario we might want to call a mock or local database, and hard coding a connection will impact our ability to either run the code completely, or call another one. In what we call pluggable architecture it’s easy to do this, but the code needs to be written like that in the first place. That’s a win for both testers and developers. In fact, many clean code practices and patterns improve both code maintainability and testability. Now let’s take a look at another aspect of pluggability. Our service now calls three other services and two databases. But we’re not interested in checking the whole integration.
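
A minimal sketch of that pluggable pattern in Python: the service takes its database as a constructor argument instead of hard-wiring a connection, so a test can hand it a mock (the class and method names are hypothetical):

    class OrderService:
        def __init__(self, db):       # the database is injected, not hard-wired
            self.db = db

        def total_for(self, customer_id):
            rows = self.db.query(
                "SELECT amount FROM orders WHERE customer = ?", customer_id)
            return sum(amount for (amount,) in rows)

    class FakeDB:
        """Test double standing in for the real database in a test scenario."""
        def query(self, sql, *params):
            return [(10,), (32,)]

    # Production wires in the real database; the test plugs in the fake.
    assert OrderService(FakeDB()).total_for("c1") == 42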


OpenAI can translate English into code with its new machine learning software Codex

Of course, while Codex sounds extremely exciting, it’s difficult to judge the full scope of its capabilities before real programmers have got to grips with it. I’m no coder myself, but I did see Codex in action and have a few thoughts on the software. OpenAI’s Brockman and Codex lead Wojciech Zaremba demonstrated the program to me online, using Codex to first create a simple website and then a rudimentary game. In the game demo, Brockman found a silhouette of a person on Google Images then told Codex to “add this image of a person from the page” before pasting in the URL. The silhouette appeared on-screen and Brockman then modified its size (“make the person a bit bigger”) before making it controllable (“now make it controllable with the left and right arrow keys”). It all worked very smoothly. The figure started shuffling around the screen, but we soon ran into a problem: it kept disappearing off-screen. To stop this, Brockman gave the computer an additional instruction: “Constantly check if the person is off the page and put it back on the page if so.” This stopped it from moving out of sight, but I was curious how precise these instructions need to be.
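
At the time of writing, Codex was in a private beta reachable through OpenAI's completions API. A minimal sketch of the prompt-to-code flow; the engine name reflects the beta-era API and the prompt is my own, so treat both as assumptions:

    import openai  # pip install openai; requires a beta API key

    openai.api_key = "sk-..."  # placeholder

    # Natural-language instruction in, generated code out.
    response = openai.Completion.create(
        engine="davinci-codex",  # beta-era Codex engine name (assumption)
        prompt='"""Move the player left or right with the arrow keys."""\n',
        max_tokens=150,
        temperature=0,
    )
    print(response.choices[0].text)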


Is Automation an Existential Threat to Developers?

“Initially, AI will augment developers, but eventually, it will replace some of them. ML/DL/AI can automate repetitive tasks, catch and correct errors, and vastly reduce the time needed to create a viable project,” says Rob Enderle, principal analyst at technology research firm Enderle Group. “These changes will significantly increase productivity, reducing much of the need for developers on a given project.” Meanwhile, automating tasks has become easier than it once was. While automation scripting isn't a lost art, there are more tools available now that don't require it. In the case of software testing, there's even a name for it: “codeless test automation.” ... So, AI isn't an existential threat to developers, at least not yet. Bear in mind that today's AI capabilities will not be the same as tomorrow's AI capabilities. The line in the sand between what developers do and what AI does will evolve over time. “DevOps skill requirements are so high that I don't see anything people are worried about. DevOps automation is the best example of that human plus machine augmentation,” says Rajendra Prasad.


Six steps to stop manufacturers becoming the next ransomware headline

Many IoT components in use today do not have security resilience built into them, leaving even well-configured environments vulnerable and in need of additional protections. Cyber criminals have recognised both this weakness and the lucrative opportunity presented by targeting manufacturers. In particular, the industry is highly vulnerable to disruptive attacks such as ransomware. An infection can quickly lead to an entire operation grinding to a halt as systems become inaccessible or are shut down in a bid to halt the spread. Criminals know that every minute of shutdown is painfully expensive for their victims, and manufacturers will be sorely tempted to pay the ransom. Such attacks have serious knock-on effects as entire supply chains are disrupted by the resulting shortages. In May, a ransomware attack on US meatpacking company JBS shut down all of its plants, cutting off the source of almost a quarter of the country’s beef. In another recent case, Palfinger, an Austrian company specialising in hydraulic systems and loaders, was hit by a major ransomware attack that took down its IT systems across the world.


Stateful Workloads on Kubernetes with Container Attached Storage

Before the advent of Container Attached Storage, developers working with Kubernetes had to get creative with workarounds in order to handle stateful applications, according to Evans. “Developers have needed to rely on scripts and other home-developed automation that can be used to track the location of data,” Evans told The New Stack. “These solutions aren’t scalable and [are] subject to errors — and ultimately, data loss. Some CAS-type functionality can be achieved using external storage arrays, but the biggest difficulty is mapping the application to the external storage. “The only other alternative is to lock an application to a node, which defeats the purpose of scale-out resiliency.” When building at scale, these workarounds can significantly hinder developer velocity. To meet the needs of developers working with Kubernetes at scale, the CAS field has grown to include tools from Portworx, Rancher, Robin, Rook, StorageOS and MayaData. OpenEBS, an open source CAS tool introduced by MayaData, has been a Cloud Native Computing Foundation (CNCF) sandbox project for two years.



Quote for the day:

"Little value comes out of the belief that people will respond progressively better by treating them progressively worse." -- Eric Harvey

Daily Tech Digest - August 10, 2021

Sky Computing, the Next Era After Cloud Computing

With multicloud being a priority for sky computing, a key challenge will be the buy-in of today’s market-leading cloud platforms: AWS, Microsoft and Google in particular. I asked Stoica which of the main platforms he thinks will make the first move towards sky computing, and what its motivation would be. “Based on economics theory, presumably clouds that are second or third [in the market] — like Google — will be most likely to do it, because this is one way for them to get more market share. If they provide a faster or cheaper infrastructure, the sky would make it easier for them to get more workload from other clouds.” However, he also noted that application developers don’t necessarily need the permission of the big cloud platforms to attain “sky computing” functionality. “You can do it today. I can have an application — like say a machine learning pipeline — and do some data processing, some training, and some serving to serve the models. I can do the training on Google and the serving on Amazon.”
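
Stoica's train-on-Google, serve-on-Amazon example needs nothing more exotic than moving the trained artifact between providers. A minimal sketch of that hand-off, assuming a model file already trained on a Google Cloud VM is shipped to S3 for serving on AWS; the bucket and file names are placeholders:

    import boto3  # pip install boto3; assumes AWS credentials are configured

    # Training has already run on a Google Cloud VM and written model.pkl.
    # The "sky" step is simply moving the artifact to where serving happens.
    s3 = boto3.client("s3")
    s3.upload_file("model.pkl", "my-serving-bucket", "models/model.pkl")

    # The serving stack on AWS later pulls the same object back down.
    s3.download_file("my-serving-bucket", "models/model.pkl", "model.pkl")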


It's a Bird, It's a Plane, It's Blockchain

Amazon isn't the only major vendor to offer BaaS (Blockchain As A Service). For example, IBM leverages the TradeLens ecosystem to advance global trade with blockchain, preventing counterfeiting of pharmaceuticals and encouraging responsible sourcing of minerals. “TradeLens has already processed 42 million container shipments, nearly 2.2 billion events, and some 20 million documents,” said IBM in a statement. “In total, five of the top six global shipping carriers are now integrated onto the platform contributing to the digitization of documentation and automated workflows.” “Oracle is the enterprise blockchain dark horse,” wrote Alan Pelz-Sharpe of U.S.-based research firm Deep Analysis in a research note. “Its stealthy but deeply funded and well-sourced entry into the market follows Oracle’s well-established pattern: the firm has a history of first dismissing new technologies, only to work quietly and then launch into the new market with full force. That being said, with Oracle’s deep roots in the supply chain, financial services, and government sectors, blockchain always made more sense for it to embrace than for some of its competitors.”


The Next Evolution in Blockchain: Decentralized Identity

The first, most primitive type of digital identifier in blockchain is the one used for cryptocurrencies: a pair of asymmetric encryption keys that identifies the holder of the funds and allows them to dispose of those holdings, with the public key visible to all and the private key reserved for its holder. Coin transactions on some blockchains are traceable, i.e. the funds can be traced in the ledger register. On other networks, however, it is impossible, or at least difficult, to follow the sequence of the funds traded. These blockchains are referred to as privacy blockchains. Unlike Monero and Zcash, the most well-known privacy currencies, which opted for the absence of traceability, Cardano maintains transparency and traceability over block records, as do many others, such as Bitcoin. Applications exist to prevent traceability on traceable blockchains. First proposed in 2013 by Greg Maxwell, CoinJoin is a method that combines multiple single-input single-output transactions into a single multiple-input multiple-output transaction.
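
The key pair mechanics behind that first identifier type can be shown directly. A minimal sketch with the ecdsa package, using the secp256k1 curve Bitcoin uses; it illustrates the principle only, not any particular chain's address or transaction format:

    from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

    # Private key: reserved for its holder; it authorizes spending.
    private_key = SigningKey.generate(curve=SECP256k1)
    # Public key: visible to all; it identifies the holder of the funds.
    public_key = private_key.get_verifying_key()

    tx = b"pay 1 coin to ..."
    signature = private_key.sign(tx)         # only the holder can produce this
    print(public_key.verify(signature, tx))  # anyone can check it: True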


What are low-code databases?

It’s difficult to draw the line between a low-code database and any generic application. Many apps are just thin front ends wrapped around a database, so users may be storing their information in traditional databases without even realizing it. A layer of automation eases the flow, at least for common applications. Some open source toolkits are designed to make this simple. Drupal and Joomla, for instance, are content management systems designed to create databases filled with pages and articles. Drupal’s Webform module adds the ability to create elaborate surveys so users can input their own data. Other content management systems like WordPress can do much of the same thing, but they’re often more focused on building out blogs and other text documents. The major cloud services are adding tools and offering multiple ways to create an app that stores data in the cloud’s data services. Google’s AppSheet offers a quick way to thread together an app that is tightly integrated with the office products in G Suite. It is one replacement for App Maker, an earlier effort that recently shut down.


At Black Hat, mobile and open source emerge as key cybersecurity dangers

By its very nature, the open-source model is not set up for generating fully secure code. When you have millions of contributors from around the world, a freely usable resource of important software tools, and an ever-changing roster of maintainers, security can easily fall through the cracks. The problem is that threat actors know this as well, and they are cashing in. The Equifax breach of 2017, which exposed the personal information of 147 million people, was attributed to an exploit of a vulnerability in an unpatched open-source version of Apache Struts. The threat landscape involves the tools used by developers and where they store them. It was reported in December that two malicious software packages were published to NPM, a code repository used by JavaScript developers to share code blocks. In addition, an analysis by GitGuardian found 2 million “secret” passwords and identifying credentials stored in public Git repositories over 2020 alone. “Things are not getting better and on top of this, applications are growing in complexity,” said Jennifer Fernick.


Security matters when the network is the internet

The move to the cloud has undermined the traditional model of the “nailed-up” private network. These days most organizations live in a hybrid cloud world where many key workloads sit in the public domain. As remote working becomes the norm, applications, people, and devices will continue to communicate externally, and the logic of channeling all that traffic through the corporate datacenter just for security enforcement alone becomes questionable. So, companies need to view security as an all-encompassing architecture and look to maintain consistent policies and protections for all users regardless of where they are working from. Remote working is a model that organizations were slowly moving towards for decades. Sure, the pandemic increased the speed and scope of its implementation dramatically, but it didn’t change the overall direction of travel. It has always been the case that who you are is more important than where you are, so access policies always should have been more about identity than location. 


Why Is Federated Learning Getting So Popular

Federated learning provides a decentralised computation strategy to train a neural model. Modern-day mobile devices churn out swathes of personal data, which can be used for training. Instead of uploading data to servers for centralised training, phones process their local data and share model updates with the server. Weights from a large population of mobiles are aggregated by the server and combined to create an improved global model. The distributed approach has been shown to work with unbalanced datasets and data that are not independent or identically distributed across clients. On-device machine learning comes with a privacy challenge: data recorded by cameras and microphones can put individuals at great risk in the event of a hack. For example, apps might expose a search mechanism for information retrieval or in-app navigation. Federated averaging has been implemented by researchers from the University of Kyoto in practical mobile edge computing (MEC) frameworks, using an MEC framework operator to manage the resources of heterogeneous clients.
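
The aggregation step at the heart of federated averaging is just a weighted mean of client updates. A minimal numpy sketch with three fake clients, weighted by how many local examples each trained on:

    import numpy as np

    # Model updates from three clients; the raw on-device data never leaves.
    client_weights = [np.array([0.9, 2.1]), np.array([1.1, 1.9]),
                      np.array([1.0, 2.0])]
    n_examples = np.array([100, 300, 600])  # local dataset sizes

    # Each client's update counts in proportion to its amount of training data.
    coeffs = n_examples / n_examples.sum()
    global_model = sum(c * w for c, w in zip(coeffs, client_weights))
    print(global_model)  # improved global model, sent back to the devices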


Android Malware ‘FlyTrap’ Hijacks Facebook Accounts

The threat actors use a variety of come-ons: Free Netflix coupon codes, Google AdWords coupon codes, and voting for the best football/soccer team or player. They’re not only enticing; they’re slick, too, with high-quality graphics – all the better to hide what they’re doing behind the scenes. “Just like any user manipulation, the high-quality graphics and official-looking login screens are common tactics to have users take action that could reveal sensitive information,” zLabs researchers explained. “In this case, while the user is logging into their official account, the FlyTrap Trojan is hijacking the session information for malicious intent.” The bad apps purport to offer Netflix and Google AdWords coupon codes, or to let users vote for their favorite teams and players at UEFA EURO 2020: The quadrennial European soccer championship that wrapped up on July 11 (delayed a year by COVID-19). But first, before the malware apps dish out the promised goodies, targeted users are told to log in with their Facebook accounts to cast their vote or collect the coupon code or credits.


To create AGI, we need a new theory of intelligence

“Brains are always housed in bodies, in exchange for which they help nurture and protect the body in numerous ways,” he writes. Bodies provide brains with several advantages, including situatedness, sense of self, agency, free will, and more advanced concepts such as theory of mind and model-free learning. “A human AGI without a body is bound to be, for all practical purposes, a disembodied ‘zombie’ of sorts, lacking genuine understanding of the world including its human inhabitants, their motivations, habits, customs, behavior, etc. the agent would need to fake all these,” Raghavachary writes. Accordingly, an embodied AGI system would need a body that matches its brain, and both need to be designed for the specific kind of environment it will be working in. “We, made of matter and structures, directly interact with structures, whose phenomena we ‘experience.’ Experience cannot be digitally computed — it needs to be actively acquired via a body,” Raghavachary said. “To me, there is simply no substitute for direct experience.”


IT leadership: How to find more ways to pay it forward

Today, as Zoom meetings and video calls continue to be the primary form of communication, it’s critical to hone those active listening skills. For instance, you might think it’s fine to grab a drink while someone is speaking – but in those few moments that you’re distracted, you’re not actually hearing what’s being said, nor what’s left unsaid. Face-to-face conversations force you to dial in your attention, but it’s easy to lose that focus when meetings are virtual. When I meet with someone virtually, I minimize distractions by first resolving to be present in every conversation. With the amount of digital distraction we have in today’s world, we need to commit to focusing on ourselves and those we are meeting with. I stay in the moment by setting my phone aside, turning off notifications, and closing other windows and programs on my machines. While there are certainly some challenges to coaching others virtually, there are advantages as well. Some introverts, I’ve found, tend to feel more comfortable expressing their opinions during video calls because they’re not physically surrounded by others, and this puts them more at ease.



Quote for the day:

"The signs of outstanding leadership are found among the followers." -- Max DePree

Daily Tech Digest - August 09, 2021

Digital transformation depends on diversity

Diversity of skills, perspectives, experiences and geographies has played a key role in our digital transformation. At Levi Strauss & Co., our growing strategy and AI team doesn’t consist solely of data and machine learning scientists and engineers. We recently tapped employees from across the organization around the world and deliberately set out to train people with no previous experience in coding or statistics. We took people from retail operations, distribution centers and warehouses, and design and planning, and put them through our first-ever machine learning bootcamp, building on their expert retail skills and supercharging them with coding and statistics. We did not limit the required backgrounds; we simply looked for people who were curious problem solvers, analytical by nature and persistent in looking for various ways of approaching business issues. The combination of existing expert retail skills and added machine learning knowledge means employees who graduated from the program now bring meaningful new perspectives on top of their business value.


The hottest hyper-automation trends disrupting business today

The global pandemic has highlighted the need for more flexible customer service using digital channels, as well as the possibility of organisations delivering service without being tied to a particular location. Both factors have driven increased adoption of hyper-automation and have made differentiated customer service one of the biggest trends in the space. According to Luis Huerta, vice-president and intelligent automation practice head, Europe at Firstsource, “as fixed-schedule, routine processes and tasks are automated in the back office, the need for staff to be tied to a specific location diminishes. Furthermore, with hyper-automation, the role of human colleagues switches from hands-on task execution to managing and monitoring bots, and dealing with complex business exceptions. ... As end customers are increasingly able to leverage automated channels to solve their needs, the pressure on support staff reduces and we give front-line colleagues an ability to focus on complex enquiries where a human touch is critical.”


How Drife and blockchain are disrupting the ride-sharing industry

Blockchain technology offers a way to make life and work easier regardless of the industry, and ride-sharing is one industry in which many blockchain disruptors are looking to become major players. There have been plenty of bold claims about giving drivers and users more freedom through decentralized technology such as the blockchain. One of the companies making this claim is Drife, a decentralized, peer-to-peer ride-sharing platform started with the intent of empowering the drivers and riders within its ecosystem. The app is built on the Aeternity blockchain, and its business model is built on taking zero commission from drivers; Drife instead charges drivers an annual fee to access the platform. “We believe when there’s a driver who spends 14 to 16 hours behind the wheel, he deserves to take back all the income to his home,” said Sheikh. ... While Uber, Lyft and others were formed with good intentions, they have become centralized, continuously paying their drivers less and charging their riders more.


AI Wrote Better Phishing Emails Than Humans in a Recent Test

Researchers have long debated whether it would be worth the effort for scammers to train machine learning algorithms that could then generate compelling phishing messages. Mass phishing messages are simple and formulaic, after all, and are already highly effective. Highly targeted and tailored “spearphishing” messages are more labor intensive to compose, though. That's where NLP may come in surprisingly handy. At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore's Government Technology Agency presented a recent experiment in which they sent 200 of their colleagues targeted phishing emails, some crafted by hand and others generated by an AI-as-a-service platform. Both sets of messages contained links that were not actually malicious but simply reported clickthrough rates back to the researchers. They were surprised to find that more people clicked the links in the AI-generated messages than in the human-written ones—by a significant margin. “Researchers have pointed out that AI requires some level of expertise. It takes millions of dollars to train a really good model,” says Eugene Lim.


Data warehousing has problems. A data mesh could be the solution

Simply stated, a data mesh invests ownership of data in the people who create it. They’re responsible for ensuring quality and relevance and for exposing the data to others in the organization who might want to use it. A shared, organization-wide set of definitions and governance standards ensures consistency, and an overarching metadata layer lets others find what they need. “Data mesh is the concept of domain-aligned data products,” Dehghani said in a video introduction. “Find the analytical data each part of the organization can share.” Dehghani lists eight attributes of a data mesh: elements must be discoverable, understandable, addressable, secure, interoperable, trustworthy and natively accessible, and they must have value on their own. The concept of decentralized data management is nothing new. Distributed databases rode the coattails of the client/server craze in the 1990s, and part of the appeal of the Hadoop software library a decade ago was that processing was distributed to where the data lived.
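To make those attributes concrete, here is a hypothetical data-product descriptor sketched in Python. The field names, catalog, and example values are illustrative assumptions, not part of Dehghani's specification or any standard; each field simply maps to one of the attributes listed above.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str          # addressable: a stable identifier
    owner_team: str    # ownership sits with the domain that creates the data
    description: str   # understandable
    endpoint: str      # natively accessible (a table, topic, or API)
    schema_uri: str    # interoperable: a published, versioned schema
    access_policy: str # secure
    quality_checks: list = field(default_factory=list)  # trustworthy
    tags: list = field(default_factory=list)  # discoverable via the metadata layer

# The overarching metadata layer could index descriptors like this one.
catalog = [
    DataProduct(
        name="orders.daily_summary",
        owner_team="order-fulfilment",
        description="Daily order counts and revenue by region",
        endpoint="warehouse://analytics/orders_daily",
        schema_uri="https://example.com/schemas/orders_daily/v2",
        access_policy="role:analyst",
        quality_checks=["row_count > 0", "no_null_order_ids"],
        tags=["orders", "finance"],
    )
]
```

The "value on its own" attribute is the one a descriptor cannot enforce; it comes from the owning team publishing data that others actually want to consume.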


Why AI isn't the only answer to cybersecurity [Q&A]

The battle between attackers and defenders is exactly where the human factor comes into play, and AI helps those defenders focus and make decisions that optimize their time and skills. What we’re seeing today is basic technology that’s designed for very specific attacks. Very sophisticated technology is used in only 0.1 percent of attacks. There are millions of attacks every day, so you’ll see some advanced techniques, but nine million other attacks are happening that are just super-rudimentary, garden-variety ransomware attacks and viruses. The latter are the mass of the attacks, and they’re also the mass of the damage. If you’re a nuclear reactor, then somebody’s going to do massive harm, but if you’re an average SMB, then you’re a lot more susceptible to those garden-variety attacks that we call drive-bys. Those machines aren’t cutting edge and those attacks aren’t either. They’re just the common things that have been learned over the past few years. However, at the forefront of attacks and with premium APTs, it’ll be a battle of wits: the defenders’ advanced technology versus the attackers’.


When Will Quantum Computing Finally Become Real?

It's important to remember that quantum computers aren't just faster computers, but harbingers of an entirely new type of computation. “If realized in the best possible way imaginable, they would fundamentally change the world as we know it,” says Tom Halverson, a staff quantum scientist on the quantum computing team at management and information technology consulting firm Booz Allen Hamilton. “Because of this, many powerful forces are positioning themselves to be ‘the first,’” he states. “When the quantum computing revolution happens, it will happen quickly.” Quantum computing is already real, but it's simply not yet practical, observes Mario Milicevic, an IEEE member and a staff communication systems engineer at MaxLinear, a broadband communications semiconductor products firm. He notes that IT leaders will need to understand whether a quantum computer is the appropriate tool for the type of problem their organization is trying to solve. “For the majority of problems, classical computers will actually outperform quantum computers and do so at a much lower cost,” Milicevic states.


New connections between quantum computing and machine learning in computational chemistry

A quantum computer, integrated with our new neural-network estimator, combines the advantages of the two approaches. While a quantum circuit of choice is being executed, we exploit the power of quantum computers to interfere states over an exponentially growing Hilbert space. After the quantum interference process has run its course, we obtain a finite collection of measurements. Then a classical tool, the neural network, can use this limited amount of data to efficiently represent partial information about a quantum state, such as its simulated energy. This handing of data from a quantum processor to a classical network leaves us with a big question: how good are neural networks at capturing the quantum correlations of a finite measurement dataset generated by sampling molecular wave functions? To answer this question, we had to think about how a neural network could emulate fermionic matter. Until now, neural networks had mostly been used to simulate spin-lattice and continuous-space problems.
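As a rough illustration of the classical half of that pipeline, the sketch below fits a simple log-linear model (a deliberately crude stand-in for the neural-network estimator, which the passage does not specify) to a finite set of simulated basis-state measurements, then estimates the expectation of a toy diagonal Hamiltonian. Every name and number here is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 3
dim = 2 ** n_qubits

# Stand-in for the quantum processor: 500 computational-basis measurements
# drawn from an unknown distribution p(x) over basis states.
true_p = rng.dirichlet(np.ones(dim))
counts = np.bincount(rng.choice(dim, size=500, p=true_p), minlength=dim)

# Classical model: p_model(x) = softmax(theta)_x, fitted to the samples
# by maximising the log-likelihood with plain gradient ascent.
theta = np.zeros(dim)
for _ in range(2000):
    p = np.exp(theta - theta.max())
    p /= p.sum()
    theta += 0.5 * (counts / counts.sum() - p)   # empirical minus model

# Toy diagonal Hamiltonian: the energy of a basis state is its number of 1-bits.
energies = np.array([bin(x).count("1") for x in range(dim)])
p = np.exp(theta - theta.max())
p /= p.sum()
print("estimated <H> =", p @ energies)
print("true      <H> =", true_p @ energies)
```

A real neural quantum state would also need to encode phases and fermionic antisymmetry, which is exactly the harder question the passage raises; this sketch only shows how limited measurement data can still support an expectation-value estimate.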


The obstacles VR will overcome to go mainstream for business users

The truth is that VR is not far off becoming an essential tool for helping businesses to become smarter and more efficient in the way they train staff. For example, vocational training provider Mimbus uses VR training for a range of skills including carpentry, construction, decorating, electrical engineering, and food processing. Working with HP VR hardware, Mimbus exploits the immersive nature of VR to remove the pressure of getting things wrong in real life and increase confidence when it comes to performing skills on the job. This solution can help businesses significantly cut training costs. VR can also help businesses to communicate with clients and design new products and services. In fact, in a sales and marketing capacity, studies have shown that customers have a 25% higher level of focus when in a virtual space, which suggests VR is a great way to capture customers’ attention. Alongside biosensors and AI, VR could be used in the future to test how drivers feel about a new car interior before it has been built, or to improve the outcome of virtual meetings and collaboration by capturing the nonverbal cues of participants.


Disentangling AI, Machine Learning, and Deep Learning

Expert systems were proving to be brittle and costly, setting the stage for disappointment, but at the same time learning-based AI was rising to prominence, and many researchers began to flock to the area. Their focus on machine learning included neural networks as well as a wide variety of other algorithms and models, such as support vector machines, clustering algorithms, and regression models. The turn of the 1980s into the 1990s is regarded by some as the second AI winter, and indeed hundreds of AI companies and divisions shut down during this time. Many of these companies were engaged in building what was at the time high-performance computing (HPC), and their closing was indicative of the important role Moore’s law would play in AI progress. Deep Blue, the chess champion system developed by IBM in the late 1990s, wasn’t powered by a better expert system, but rather by a compute-enabled alpha-beta search. Why pay a premium for a specialized Lisp machine when you can get the same performance from a consumer desktop?



Quote for the day:

"Leaders must be good listeners. It_s rule number one, and it_s the most powerful thing they can do to build trusted relationships." -- Lee Ellis