Daily Tech Digest - December 19, 2020

The road to successful change is lined with trade-offs

Leaders should borrow an important concept from the project management world: Go slow to go fast. There is often a rush to dive in at the beginning of a project, to start getting things done quickly and to feel a sense of accomplishment. This desire backfires when stakeholders are overlooked, plans are not validated, and critical conversations are ignored. Instead, project managers are advised to go slow — to do the work needed up front to develop momentum and gain speed later in the project. The same idea helps reframe notions about how to lead organizational change successfully. Instead of doing the conceptual work quickly and alone, leaders must slow down the initial planning stages, resist the temptation and endorphin rush of being a “heroic” leader solving the problem, and engage people in frank conversations about the trade-offs involved in change. This does not have to take long — even just a few days or weeks. The key is to build the capacity to think together and to get underlying assumptions out in the open. Leaders must do more than just get the conversation started. They also need to keep it going, often in the face of significant challenges. 


AI, ML can bolster cybersecurity, and vice versa, professor says

Machine learning and artificial intelligence for cybersecurity: these are two ways we are tackling the problem. For example, machine learning has produced good results in machine vision and computer vision, or you can look at games. Chess engines were built using machine learning and artificial intelligence, and as a result a chess program beat the smartest human player a couple of years ago. It has been a very promising application of AI and machine learning. At the same time, you can look at how machine learning algorithms are compromised. If you recall, there was a Tesla car speeding; I think it slipped by at 55 miles per hour on a road [with a speed limit] of 35 miles per hour, just because of a smart piece of tape [on a sign]. So we are trying to use the benefits that AI and machine learning offer to secure the systems we envision in the years to come. But, at the same time, can it be used the other way around too? That's why my research focuses on AI and machine learning for cybersecurity and, at the same time, on cybersecurity for AI, because we want to secure the systems that are working for a greater good. Another example I could give: suppose you are using a machine learning algorithm to filter applicants from the application pool when hiring somebody.


Low Code: CIOs Talk Challenges and Potential

Former CIO Dave Kieffer says that “cloud ERPs can't be customized, but they can be extended. CRMs can provide full platforms. Extension, not customization, should be the goal no matter the platform.” However, Sacolick suggests that when “CIOs can seamlessly integrate data and workflow with ERP and CRM, then customization will be less needed. It can become an architecture decision that provides a lot more business flexibility. Low-code and no-code apps and integrations are one approach.” Deb Gildersleeve agrees and says, “a lot of these legacy systems can’t be customized, and for those that can be, most organizations don’t want to spend the time, money and resources needed to customize them. That’s where low code can come in to complement these systems and work within your existing tech stack.” Francis, meanwhile, suggests, “there will likely always be a need for some amount of high coding.” Sacolick says, “I call high coding DIY coding. But the real challenge is getting app developers on board with using low code/no code where it makes sense. Many really love coding, and some lose sight that their role is to provide solutions and innovations.”


Where are we really with AI and ML? 5 takeaways from the AI Innovators Forum

The reality is AI and ML can’t be applied to every scenario. For example, one company created a sophisticated ML approach to examining marketing analytics and performance so AI could predict the most effective marketing channels, but it required an incredible number of data integrations, and it wasn’t predictably better at suggesting marketing channels than existing experts. Likewise, it is not yet possible to train on enough scenarios of kids running into the street or a car swerving into your lane for self-driving cars to learn and act accordingly. What is the safe default? Should the car stop when it doesn’t know what to do? Should it revert to a manual mode? These are just some of the ongoing challenges in training the “last 10%” of AI, despite the vast majority of driving decisions being more efficient and accurate than those of a human driver on the road today. Beyond the more obvious self-driving car scenarios, many AI and ML use cases bring ethical considerations that should be taken seriously. Like any other technology at scale, there must be frameworks and guardrails to help understand potential impact, mitigation paths, and when to forgo the use of these technologies altogether.


Ensuring Data Residency and Portability

Even if the server is based in California and stores only Indian data, it does not come within the sovereign jurisdiction of India or Indian courts. While it may be technically possible to isolate a set of data within the server that is deemed perverse or critical evidence and electronically “seal” it, the California company might not be interested in blocking a large amount of space within servers it has invested millions in, for a court case in faraway India. This is just one example of the limits of national laws versus the limitless, borderless movement of data made possible by technology. Hence, data localisation has become a hot topic for governments around the world. Alongside it comes the issue of data portability. What does the right to data portability mean? It is a right that allows anybody who has put a set of data into one service or site to obtain it from that service or site and reuse it for their own purposes across different services. The essence of portability is in moving, copying or transferring personal data from one service to another without compromising security. This right also incorporates the right to have the quality of the data undiminished and unchanged, along with the caveat that all such data will have been gathered from the user only with his or her consent.


Countries that retaliate too much against cyberattacks make things worse for themselves

“If one country becomes more aggressive, then the equilibrium response is that all countries are going to end up becoming more aggressive,” says Alexander Wolitzky, an MIT economist who specializes in game theory. “If after every cyberattack my first instinct is to retaliate against Russia and China, this gives North Korea and Iran impunity to engage in cyberattacks.” But Wolitzky and his colleagues do think there is a viable new approach, involving a more judicious and well-informed use of selective retaliation. “Imperfect attribution makes deterrence multilateral,” Wolitzky says. “You have to think about everybody’s incentives together. Focusing your attention on the most likely culprits could be a big mistake.” The study is a joint project, in which Sandeep Baliga, the John L. and Helen Kellogg Professor of Managerial Economics and Decision Sciences at Northwestern University’s Kellogg School of Management added to the research team by contacting Wolitzky, whose own work applies game theory to a wide variety of situations, including war, international affairs, network behavior, labor relations, and even technology adoption.


Driving autonomous vehicles forward with intelligent infrastructure

As cars are becoming more autonomous, cities are becoming more intelligent by using more sensors and instruments. To drive this intelligence forward, smart city IT infrastructure must be able to capture, store, protect and analyze data from autonomous vehicles. Similarly, autonomous vehicles could greatly improve their performance by integrating data from smart cities. In smart city planning, stakeholders must consider how they will enable the sharing of data in both directions, to and from autonomous vehicles, and how that data can be analysed and acted upon in real-time, so traffic keeps moving and drivers, passengers and pedestrians are kept safe. This means that a city needs physical infrastructure to handle the growing numbers of autonomous vehicles that will be on the streets and an IT infrastructure that can easily manage data storage, performance, security, resilience, mobilisation and protection from a central management console. For example, there’s a case to be made that cities should already be building networks of smart sensors along the roadside. These would have the capability to measure traffic conditions and potentially even monitor obstacles such as fallen trees, traffic collisions or black ice.


How teaching 'future resilient' skills can help workers adapt to automation

Automation itself isn’t a problem, but without a reskilling strategy it will be. Here’s how quality non-degree credentials can help. This fear of automation is not new. As the late Harvard professor Calestous Juma laid out in his seminal book Innovation and Its Enemies: Why People Resist New Technologies, technological progress has always come with some level of public concern. Bellhops feared automatic elevators, and so did bowling pin resetters. Video did indeed “kill the radio star” and it wasn’t long before internet media streaming services made video retailers obsolete in the mid-2000s. This “creative destruction” means that automation-enabling technologies will destroy jobs, but they will also increase productivity, lower prices and create new (hopefully better) jobs too. Some have even advocated that in order to help low-income workers, we should speed up the automation of low-income jobs. Non-degree credentials can help workers adapt. Automation can change the world for the better, but only if we prepare for it. To be sure, non-degree credentials are no silver bullet against automation displacement. A number of policy recommendations can help our world transition to new, high-quality jobs.


ML-Powered Digital Twin For Predictive Maintenance — Notes From Tiger Analytics

In the last decade, the Industrial Internet of Things (IIoT) has revolutionised predictive maintenance. Sensors record operational data in real-time and transmit it to a cloud database. This dataset then feeds a digital twin, a computer-generated model that mirrors the physical operation of each machine. The concept of the digital twin has enabled manufacturing companies not only to plan maintenance but to get early warnings of the likelihood of a breakdown, pinpoint the cause, and run scenario analyses in which operational parameters can be varied at will to understand their impact on equipment performance. Several eminent ‘brand’ products exist to create these digital twins, but the software is often challenging to customise, cannot always accommodate the specific needs of every manufacturing environment, and significantly increases the total cost of ownership. ML-powered digital twins can address these issues when they are purpose-built to suit each company’s specific situation. They are affordable, scalable, self-sustaining, and, with the right user interface, are extremely useful in telling machine operators the exact condition of the equipment under their care.
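To make the idea concrete, the sketch below shows one way a purpose-built digital twin might score live sensor readings for early breakdown warnings. It is a minimal illustration, not Tiger Analytics' implementation: it assumes scikit-learn, synthetic "healthy" history standing in for the IIoT stream, and made-up sensor column names.

```python
# Illustrative sketch: fit an anomaly detector on readings captured while the
# machine ran normally, then flag readings that drift toward likely-failure
# conditions. Sensor names and values are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
features = ["vibration_rms", "bearing_temp_c", "motor_current_a"]

# Stand-in for historical readings from the IIoT stream feeding the digital twin
healthy = pd.DataFrame(
    rng.normal(loc=[2.0, 60.0, 12.0], scale=[0.2, 2.0, 0.5], size=(5000, 3)),
    columns=features,
)

model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
model.fit(healthy)

def assess(latest: pd.DataFrame) -> pd.DataFrame:
    """Flag readings that drift away from the machine's healthy envelope."""
    scored = latest.copy()
    scored["anomaly_score"] = model.decision_function(latest[features])
    scored["needs_inspection"] = model.predict(latest[features]) == -1
    return scored

# A reading with an overheating bearing should be flagged for inspection.
print(assess(pd.DataFrame([[2.1, 85.0, 12.3]], columns=features)))
```

In a deployment, the `needs_inspection` flag would feed the digital twin's user interface and raise an early-warning work order rather than print to a console.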


Artificial intelligence stands at odds with the goals of cutting greenhouse emissions. Here’s why

What does this mean for the future of AI research? Things may not be as bleak as they look. The cost of training might come down as more efficient training methods are invented. Similarly, while data center energy use was predicted to explode in recent years, this has not happened due to improvements in data center efficiency, more efficient hardware and cooling. There is also a trade-off between the cost of training the models and the cost of using them, so spending more energy at training time to come up with a smaller model might actually make using them cheaper. Because a model will be used many times in its lifetime, that can add up to large energy savings. In my lab’s research, we have been looking at ways to make AI models smaller by sharing weights or using the same weights in multiple parts of the network. We call these shapeshifter networks because a small set of weights can be reconfigured into a larger network of any shape or structure. Other researchers have shown that weight-sharing has better performance in the same amount of training time. Looking forward, the AI community should invest more in developing energy-efficient training schemes.
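As an illustration of the weight-sharing idea (not the shapeshifter-network architecture itself), the PyTorch sketch below reuses a single linear layer at several depths, so the model has the effective depth of four layers but the parameter count of one.

```python
# Minimal weight-sharing illustration in PyTorch: one Linear layer's parameters
# are reused at every depth, shrinking the model without changing its depth.
import torch
import torch.nn as nn

class SharedMLP(nn.Module):
    def __init__(self, width: int = 256, depth: int = 4):
        super().__init__()
        self.shared = nn.Linear(width, width)   # single set of weights
        self.depth = depth

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for _ in range(self.depth):
            x = torch.relu(self.shared(x))      # same weights reused each pass
        return x

model = SharedMLP()
n_params = sum(p.numel() for p in model.parameters())
print(f"4 effective layers, parameters of 1: {n_params:,}")
```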



Quote for the day:

"Any man who has ever led an army, an expedition, or a group of Boy Scouts has sadism in his bones." -- Tahir Shah

Daily Tech Digest - December 18, 2020

Chaos Engineering: A Science-based Approach to System Reliability

While testing is standard practice in software development, it’s not always easy to foresee issues that can happen in production, especially as systems become increasingly complex in order to deliver maximum customer value. The adoption of microservices enables faster release times and more possibilities than we’ve ever seen before; however, it also introduces challenges. According to the 2020 IDG cloud computing survey, 92 percent of organizations’ IT environments are at least somewhat in the cloud today. In 2020, we saw highly accelerated digital transformation as organizations had to quickly adjust to the impact of a global pandemic. With added complexity come more possible points of failure. The trouble is that the humans managing these intricate systems cannot possibly understand or foresee all of the issues, because it’s impossible to understand how each of the individual components of a loosely coupled architecture will relate to the others. This is where Chaos Engineering steps in to proactively create resilience. The major caveat of Chaos Engineering is that things are broken in a very intentional and controlled manner in production, unlike regular QA practices, where this is done in safe development environments. It is methodical and experimental, and less ‘chaotic’ than the name implies.
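A minimal sketch of what such a controlled experiment looks like in code is shown below; the health-check and fault-injection endpoints are hypothetical, and real tooling such as Chaos Toolkit, Gremlin or Litmus wraps the same loop with stronger guardrails.

```python
# Schematic chaos experiment: 1) verify the steady-state hypothesis,
# 2) inject a controlled fault, 3) re-check the hypothesis, 4) always roll
# the fault back. Service names and URLs are placeholders.
import requests

CHECKOUT_HEALTH = "https://shop.example.com/healthz"        # hypothetical
FAULT_API = "https://chaos.example.com/faults/cart-latency"  # hypothetical

def steady_state_ok() -> bool:
    """Hypothesis: checkout answers 200 in under 500 ms."""
    resp = requests.get(CHECKOUT_HEALTH, timeout=0.5)
    return resp.status_code == 200

def run_experiment():
    assert steady_state_ok(), "System unhealthy before the experiment; abort."
    requests.post(FAULT_API, json={"delay_ms": 300, "duration_s": 60})  # inject fault
    try:
        if steady_state_ok():
            print("Hypothesis held: checkout tolerates 300 ms of cart latency.")
        else:
            print("Hypothesis broken: found a weakness before customers did.")
    finally:
        requests.delete(FAULT_API)   # always remove the fault

if __name__ == "__main__":
    run_experiment()
```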


ECLASS presents the Distributed Ledger-based Infrastructure for Industrial Digital Twins

Advancing digitalization, increasing networking and horizontal integration in the areas of purchasing, logistics and production, as well as in the engineering, maintenance and operation of machines and products, are creating new opportunities and business models that were unimaginable before. Classic value chains are turning more and more into interconnected value networks in which partners can seamlessly find and exchange the relevant information. Machines, products and processes receive their Digital Twins, which represent all relevant aspects of the physical world in the information world. The combination of physical objects and their Digital Twins creates so-called Cyber Physical Systems. Over the complete lifecycle, the relevant product information and production data captured in the Digital Twin must be available to the partners in the value chain at any time and in any place. The digital representation of the real world in the information world, in the form of Digital Twins, is therefore becoming increasingly important. However, the desired horizontal and vertical integration and cooperation of all participants in the value network across company boundaries, countries, and continents can only succeed on the basis of common standards.


Data Protection Bill won’t get cleared in its current version

Pande from Omidyar Network India said stakeholders of the data privacy regulations should consider making the concept of consent more effective and simple. The National Institute of Public Finance and Policy (NIPFP) administered a quiz in 2019 to test how well urban, English-speaking college students understand the privacy policies of Flipkart, Google, Paytm, Uber, and WhatsApp. The students only scored an average of 5.3 out of 10. The privacy policies were as complex as a Harvard Law Review paper, Pande said. Facebook’s Claybaugh, however, said that “despite the challenges of communicating with people about privacy, we do take pretty strong measures both in our data policy which is interactive, in relatively easy-to-understand language compared to, kind of, the terms of service we are used to seeing.” Lee, who earlier worked with Singapore’s Personal Data Protection Commission, said the challenges facing a data protection authority (DPA) are “manifold”. She said it must be ensured that the DPA is “independent” and is given the necessary powers, especially when it must regulate the government. The DPA must be staffed with the right people, with knowledge of the technical and legal issues involved, she added.


India approves game-changing framework against cyber threats

The office of National Security Advisor Ajit Doval, sources said, noted that with the increasing use of Internet of Things (IoT) devices, the risk will continue to increase manifold and the advent of 5G technology will further increase the security concerns resulting from telecom networks. Maintaining the integrity of the supply chain, including electronic components, is also necessary for ensuring security against malware infections. Telecom is also the critical underlying infrastructure for all other sectoral information infrastructure of the country such as power, banking and finance, transport, governance and the strategic sector. Security breaches resulting in compromise of confidentiality and integrity of information or in disruption of the infrastructure can have disastrous consequences. Sources said that in view of these issues, the NSA office had recommended a framework -- 'National Security Directive on Telecom Sector', which will address 5G and supply chain concerns. Under the provisions of the directive, in order to maintain the integrity of the supply chain security and in order to discourage insecure equipment in the network, government will declare a list of 'Trusted Sources/Trusted Products' for the benefit of the Telecom Service Providers (TSPs).


The case for HPC in the enterprise

Essentially, HPC is an incredibly powerful computing infrastructure built specifically to conduct intensive computational analysis. Examples include physics experiments that identify and predict black holes, or modeling genetic sequencing patterns against disease and patient profiles. In the past year, the Amaro Lab at UC San Diego performed modeling on the COVID-19 coronavirus down to the atomic level using one of the top supercomputers in the world at the Texas Advanced Computing Center (TACC). I hosted a webinar with folks from UCSD, TACC and Intel discussing their work. Those types of compute-intensive workloads are still happening. However, enterprises are also increasing their demand for compute-intensive workloads. Enterprises are processing increasing amounts of data to better understand customers and business operations. At the same time, edge computing is creating an explosive number of new data sources. Due to the sheer amount of data, enterprises are leveraging automation in the form of machine learning and artificial intelligence to parse the data and gain insights while making faster and more accurate business decisions. Traditional systems architectures are simply not able to keep up with the data tsunami.


5 reasons IT should consider client virtualization

First is the ability to run different operating systems or different versions of the same operating system. For example, many enterprise workers are increasingly running cross-platform applications, such as Linux applications for developers, Android for healthcare or finance, and Windows for productivity. Second is the potential to isolate workloads for better security. Note that different types of virtualization models co-exist to support the diverse needs of customers (and applications in general are getting virtualized for better cloud and client compatibility). The focus of this article is full client virtualization, which enables businesses to take complete advantage of the capabilities of rich commercial clients, including improved performance, security and resilience. Virtualization on the client is different from virtualization on servers. It’s not just about CPU virtualization, but also about creating a good end-user experience with, for example, better graphics, responsiveness of I/O, network, optimized battery life of mobile devices and more. A decade ago, the goal of client virtualization was to use a virtual machine for a one-off scenario or workload.


The top 6 use cases for a data fabric architecture

A data fabric architecture promises a way to deal with many of the security and governance issues being raised by new privacy regulations and the rise in security breach incidents. "By far the largest positive impact of a data fabric for organizations is the focus on enterprise-wide data security and governance as part of the deployment, establishing it as a fundamental, ongoing process," said Wim Stoop, director of product marketing at Cloudera. Data governance is often seen in isolation, tied to a single use case like tackling regulatory compliance needs or departmental requirements. With a data fabric, organizations are required to take a step back and consider data management holistically. This delivers the self-service access to data and analytics businesses demand to experiment and quickly drive value from data. Such a degree of management, governance and security of data then also makes proving compliance -- both industry and regulatory -- more or less a side effect of having implemented the fabric itself. Although this is not a full solution, it greatly reduces the effort associated with adhering to compliance requirements. Platz cautioned that there is a wide gulf between a vision for a perfect data fabric and what is practical today. "In practice, many first versions of data fabric architectures look more like just another data lake," Platz said.


Malicious Browser Extensions for Social Media Infect Millions of Systems

"This could be used to gather credentials and other sensitive corporate data from the websites visited by the victim," he says. "We are preparing a technical blog post with more technical information and IoCs, but for now, we can share the ... malicious domains." The malicious extensions are the latest attempt by cybercriminals to hide code in add-ons for popular browsers. In February, independent researcher Jamila Kaya and Duo Security announced they had discovered more than 500 Chrome extensions that infected millions of users' browsers to steal data. In June, Awake Security reported more than 70 extensions in the Google Chrome Web store were downloaded more than 32 million times and which collected browsing data and credentials for internal websites. In its latest research, Avast found the third-party extensions would collect information about users whenever they clicked on a link, offering attackers the option to send users to an attacker-controlled URL before forwarding them to their destinations. The extensions also collect the users' birthdates, e-mail addresses, and information about the local system, including name of the device, its operating system, and IP addresses.


How to use Agile swarming techniques to get features done

Teams that concentrate on individual skills and tasks end up with some members far ahead and others grinding away at unfinished work. For example, a back-end developer is still working on a feature, while the front-end developer for that feature has finished coding. The front-end developer then starts coding the next feature. The team can design hooks into the code to let the front-end developers validate their work. However, a feature is not done until a team completes the whole thing, fully integrates it and tests it. Letting developers move asynchronously through the project might result in good velocity metrics, but those measures don't always translate to the team delivering the feature on time. If testers discover issues in a delivered feature, the entire team must return to already completed tasks. Let this scenario play out in a real software organization, and you end up with partially completed work on many disparate tasks, and nothing finished. The goal of Agile development is not to ensure the team is 100% busy, with each person grabbing new product backlog items as soon as they complete their prior task. This approach to development results in extensive multitasking and ultimately slows the flow of completed items.


Application Level Encryption for Software Architects

Unless well-defined, the task of application-level encryption is frequently underestimated and poorly implemented, and it results in haphazard architectural compromises when developers find out that integrating a cryptographic library or service is just the tip of the iceberg. Whoever is formally assigned the job of implementing encryption-based data protection faces thousands of pages of documentation on how to implement things better, but very little on how to design things correctly. Design exercises turn out to be a bumpy ride every time you don’t expect the need for design and end up with a sequence of ad-hoc decisions because you anticipated getting things done quickly. First, you face key model and cryptosystem choice challenges, which hide under “which library/tool should I use for this?” Hopefully, you chose a tool that fits your use case security-wise, not the one with the most stars on GitHub. Hopefully, it contains only secure and modern cryptographic decisions. Hopefully, it will be compatible with other teams’ choices when the encryption has to span several applications/platforms. Then you face key storage and access challenges: where to store the encryption keys, how to separate them from the data, what the integration points are where components and data meet for encryption/decryption, and what the trust/risk level is toward those components.
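For illustration, here is a minimal envelope-encryption sketch, assuming Python's `cryptography` package: each record gets its own AES-GCM data key, and that key is wrapped by a key-encryption key held apart from the data. It is one possible answer to the key-separation questions above, not a prescription from the article.

```python
# Sketch of application-level envelope encryption with AES-GCM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In production the KEK comes from an HSM or cloud KMS and never sits next to
# the data; generating it inline here only keeps the sketch self-contained.
KEK = AESGCM.generate_key(bit_length=256)

def encrypt_record(plaintext: bytes, aad: bytes) -> dict:
    data_key = AESGCM.generate_key(bit_length=256)       # one key per record
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, aad)
    wrap_nonce = os.urandom(12)                           # wrap the data key with the KEK
    wrapped_key = AESGCM(KEK).encrypt(wrap_nonce, data_key, b"data-key")
    return {"nonce": nonce, "ciphertext": ciphertext,
            "wrap_nonce": wrap_nonce, "wrapped_key": wrapped_key}

def decrypt_record(record: dict, aad: bytes) -> bytes:
    data_key = AESGCM(KEK).decrypt(record["wrap_nonce"], record["wrapped_key"], b"data-key")
    return AESGCM(data_key).decrypt(record["nonce"], record["ciphertext"], aad)

row = encrypt_record(b"PAN=4111111111111111", aad=b"customer:42")
print(decrypt_record(row, aad=b"customer:42"))   # original plaintext comes back
```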



Quote for the day:

"Public opinion is no more than this: What people think that other people think." -- Alfred Austin

Daily Tech Digest - December 17, 2020

Danske Bank fights money laundering with AI

The bank has been on a mission to transform its anti-money laundering capabilities after substantial illegal transaction activity was uncovered at a Danske Bank subsidiary in Estonia. The Baltic state made global headlines when Danske Bank was found to have transferred more than €200bn of suspicious money through its tiny branch in Tallinn. Danske Bank’s transaction-focused AML-detection apparatus proved sluggish in identifying account and money transfer divergence anomalies at the branch in Estonia’s capital. During the pilot of the platform from Quantexa, Danske Bank improved its ability to detect suspicious activity in its market trading business in areas such as foreign exchange, securities and equities.  “Harnessing technology enables us to identify complex financial crime behaviours more effectively. Running advanced analytics on a wide range of datasets can help us better detect, investigate and prevent financial crime,” said Satnam Lehal, head of financial crime detection at Danske Bank.  The bank has now taken the pilot project to the next stage and integrated it with existing infrastructure.


Cybersecurity: A Bleak 'Progress' Report

The hack appears to be the result of the successful insertion of malware into SolarWinds' recent updates. The SolarWinds technology is used by the Pentagon, all five branches of the U.S. military, the State Department, NASA, the NSA, the Postal Service, the National Oceanic Atmospheric Administration, the Department of Justice and the Office of the President of the United States, among others. Plus, it's installed at thousands of companies and organizations around the world. Perhaps these events will serve as a wake-up call to our citizenry and bring the severity of our exposure to adversarial cyberattacks into a more focused picture. Today, I'm recalling watching "60 Minutes" in February 2015 - an episode titled: "DARPA: Nobody's Safe on the Internet" with Leslie Stahl interviewing some intriguing guests. A review of that segment helps to illustrate that some efforts back then were misguided, and cybersecurity remains very much a work in progress. The "60 Minutes" report said the Department of Defense had put Dan Kaufman, a former video game developer, in charge of inventing technology to secure the internet and given him a staff of 25 "brainiacs" and a budget of $500 million a year to "do something to help national security" and "make the world a better, safer, more secure place."


SolarWinds attack explained: And why it was so hard to detect

The attackers kept their malware footprint very low, preferring to steal and use credentials to perform lateral movement through the network and establish legitimate remote access. The backdoor was used to deliver a lightweight malware dropper that has never been seen before and which FireEye has dubbed TEARDROP. This dropper loads directly in memory and does not leave traces on the disk. Researchers believe it was used to deploy a customized version of the Cobalt Strike BEACON payload. Cobalt Strike is a commercial penetration testing framework and post-exploitation agent designed for red teams that has also been adopted and used by hackers and sophisticated cybercriminal groups. To avoid detection, attackers used temporary file replacement techniques to remotely execute their tools. This means they modified a legitimate utility on the targeted system with their malicious one, executed it, and then replaced it back with the legitimate one. A similar technique involved the temporary modification of system scheduled tasks by updating a legitimate task to execute a malicious tool and then reverting the task back to its original configuration.


3 Ways to Build a Buzzing Blockchain Community

Holding a hackathon is a sure-fire way of attracting programming talent and development activity to a blockchain platform. The concept is popular among many kinds of tech companies and developers. In the context of blockchain, a platform’s founders or operators will usually offer some type of incentive to participants in exchange for their creative contributions. Often, hackathons have been held as part of blockchain-developer conferences or events. However, with the social restrictions currently in place, many projects are forging ahead with remote hackathons, and with a high level of participation and success. ... Blockchain and cryptocurrencies are chock-full of jargon, acronyms and technical concepts that can be off-putting to many newcomers. Not to mention, starting to invest or trade in cryptocurrency comes with particular risks that everyone should be aware of before they start.  For this reason, those companies that can provide newcomers with a comprehensive library of educational materials have an opportunity to inspire loyalty from newcomers. One example is Indian cryptocurrency exchange Bitbns, which has created its own multimedia academy targeted at those eager to learn about cryptocurrency.


How To Support a Zero Trust Model with Automation

Zero Trust security architectures are underpinned by a segmented environment that limits the movements of attackers, isolating them and reducing their impact. In a Zero Trust environment, all users, devices and applications are assigned a profile, based on techniques like digital identity, device health verification and application validation. They are then granted restricted access based on their profile. This prevents attackers from moving around freely, but it is also a precise and methodical process that grants and maintains access for genuine users and resources as and when required. Enforcing granular permissions based on the profile of the user or digital asset is a complex process. Automation provides the means to perform these actions programmatically and at scale. It also ensures that new technologies and solutions can be constantly added, either to support or replace existing systems, to ensure actions are performed in line with Zero Trust policies. Faced with a backdrop of emergent threats and an enterprise environment that is constantly evolving, automation provides the means to adapt quickly. Policies can be rewritten, while systems and processes can be reconfigured to deal with sudden changes or requests. This can be done at a micro level, dealing with very specific issues, or on a broader macro level, dealing with enterprise-wide issues.
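A toy sketch of such a profile-based decision is shown below; the profile fields and segment names are hypothetical, and real deployments typically externalize this logic into a policy engine so policies can be rewritten and redeployed automatically.

```python
# Hypothetical profile-based access decision, evaluated on every request.
from dataclasses import dataclass

@dataclass
class Profile:
    identity_verified: bool     # e.g. MFA completed
    device_healthy: bool        # e.g. patched, disk encrypted
    app_validated: bool         # workload identity / signature checked
    role: str

# Segment-level policy: which roles may reach which segment, never "any"
SEGMENT_POLICY = {
    "payments-db": {"payments-engineer"},
    "hr-records": {"hr-analyst"},
}

def allow(profile: Profile, segment: str) -> bool:
    """Deny by default; every check must pass on every request."""
    return (
        profile.identity_verified
        and profile.device_healthy
        and profile.app_validated
        and profile.role in SEGMENT_POLICY.get(segment, set())
    )

print(allow(Profile(True, True, True, "hr-analyst"), "payments-db"))  # False
```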


VPNs, MFA & the Realities of Remote Work

As organizations have adapted to remote work and adopted new solutions, it's critical they understand how their architecture has changed in order to identify the evolving threat surface. But it's also important to realize that an IT architecture is like a fingerprint; there are some common types, but ultimately, they're unique. VPN is more effective for an on-premises environment, while MFA is more effective for a cloud-based setup. Let's take VPNs as an example. The most straightforward use case of a VPN is to establish a secure connection to access corporate infrastructure. You're at home, on your own wireless network, but you connect through a VPN. The VPN is protected by a firewall device to access the corporate network. This model works well for organizations that have a data center and file servers on-site because they can still leverage their network perimeter to protect it. However, VPN traffic can get more challenging when you consider the scale of larger organizations. Once hundreds of remote employees are connecting through VPN, the burden of moving data to a point from which it can be distributed over the network can become significant.


CISOs should be ready to confront the psychology of cybersecurity in 2021

Clearly, we need to introduce more support around security awareness and understand the pressures that employees are under, whether self-inflicted or the result of external factors. This is also a real challenge for security folks. We’re still not that good at that kind of emotional intelligence. We love a process, we love a policy. But we’re still not very great at this touchy, feely, fluffy emotional space. There’s a real role here for a human resources professional to get engaged to help deal with this one. ... There is a real leadership opportunity there to create the right environment that encourages people to talk about some of those issues. We’ve seen some real progress in that space. Because let’s face it, we all have good days and bad days. I think encouraging people to talk about that, to share those things, is hugely important, as is encouraging people to take breaks and move away from the screen. We’ve moved into a realm where those kinds of things are really important for us to be picking up on. Some of us are doing it quite naturally, perhaps, but they are not skillsets that are the strong suits for CISOs and security professionals.


Cost savings and security are key drivers of MSP adoption

When it comes to budgeting, the three areas of information technology noted as incurring the most expense by SMB and mid-market company executives are: information security (60%), data storage (48%), and data analytics/business intelligence (45%). The IT areas incurring the least expense in budgets are: application licensing (24%), digital transformation (21%), and data center automation (21%). “We conducted this survey to better understand the dynamics of how business executives perceive and select MSPs, as well as how MSPs can provide the greatest value to their SMBs and mid-market customers,” said Infrascale CEO Russell P. Reeder. “Based on the results, we recommend that MSPs continue to lead with solutions focused on security, data storage, and data analytics. Our most successful MSP partners are enabling their businesses to be more secure and to always access and analyze their data. The lifeblood of any business is its data, so it makes sense that securing it, backing it up, and analyzing it is most important to businesses.”


Supply chain attacks show why you should be wary of third-party providers

"Security is really only as good as the weakest link," says John Titmus, director of sales engineering EMEA at CrowdStrike, a security vendor. "Supply chain attacks are getting more widespread and growing in frequency and sophistication. You need to understand the nature of the risks and develop a security roadmap around it." Deep Root Analytics, a marketing firm used by the Republican National Committee, leaked the personal data of 200 million voters in 2017. This is a small company, that, according to its LinkedIn profile, has fewer than 50 employees. Deep Root Analytics accidentally put the data on a publicly accessible server. Larger service companies are also vulnerable. The Verizon breach, which involved six million customer records, was caused by Nice Systems, a provider of customer service analytics. Nice put six months of customer service call logs, which included account and personal information, on a public Amazon S3 storage server. Nice reports that it has 3,500 employees and provides services to more than 85% of Fortune 100 customers. Nice is tiny compared to Deloitte, an accounting firm with more than a quarter million employees.


How Apple’s pending search engine hints at a rise in enterprise search

For many years there’s been a gulf between the tools we use as consumers and the tools we use as knowledge workers. It’s often a source of frustration: why is it that at an internet search console, we can find the answer to the most obtuse and bizarre question we can possibly imagine in seconds through a couple of clicks of the mouse, yet it takes me ten minutes to find the document that Jack from Accounts sent me two weeks ago? The answer lies in the complexity of enterprise search as a function. Behind the veil of the easily accessible user interface, enterprise search is more complex than it appears and there are much greater technological challenges to be overcome, despite the visual similarities with internet search. For example, content online can easily be categorised by the number of clicks and views a page has received, in order for relevancy to be established, as traffic volumes are incredibly high. However, the document that Jack from Accounts sent is unlikely to have been opened anywhere near as much, so other technologies, such as natural language processing, need to be relied upon in order to understand the content of documents and recommend relevancy.



Quote for the day:

"Positive thinking will let you do everything better than negative thinking will." -- Zig Ziglar

Daily Tech Digest - December 16, 2020

5G and the sustainable future: a look to 2025

5G could boost economic resilience by providing a reliable second source of precise location and timing information, not only assisting global navigation satellite systems (GNSS) but complementing them. GNSS only work in “open sky” environments — they can’t provide information in “sheltered” settings such as tunnels or buildings. 5G delivers position and timing information that is just as accurate as GNSS but can do so while providing unbroken indoor and outdoor coverage. Broadband has become essential to our social lives and the world’s economies (both national and local), but there are still many “white spots” where no coverage exists. This digital divide must be addressed if more people are to access the economic opportunities and benefits that mobile connectivity provides. As future economic growth will depend less on basic connectivity and more on advanced service profiles, the harmful effects of these white spots will only become more acute. White spots in farmland hinder modern agricultural water management. An estimated 42 percent of the world’s accessible fresh water is lost through leaky irrigation systems, inefficient application methods and growing crops that are too “thirsty” for their environment.


SSO and MFA Are Only Half Your Identity Governance Strategy

The time employees spend waiting for access approval is paid time when they are not working. As mentioned, the time spent by IT staff entering or resetting passwords adds up. It's an unnecessary and costly allocation of resources. The crux of the problem is not only understanding which roles need access to which application assets but determining what is the right level of access. The faster this can be achieved with less human intervention, the greater the efficiency and cost-saving. New automated solutions that harness machine learning hold promise to help IT and security teams with smart recommendations about where to direct their efforts. Prioritization is essential when managing thousands or tens of thousands of identities. Lost in the sea of identities, it is easy for organizations to lose track of which permissions they have granted. This can lead to permission sprawl and unnecessary exposure. However, automated tracking of users, their roles, and the permissions granted to them can dramatically reduce the risk of unused entitlements that attackers can exploit to gain access to valuable assets. Permission management has a lot of catching up to reach the robustness and adoption of SSO-related tools.


Microservice Architecture and its 10 Most Important Design Patterns

Microservice Architecture is about splitting a large, complex system vertically (per functional or business requirements) into smaller sub-systems which are processes (and hence independently deployable), and these sub-systems communicate with each other via lightweight, language-agnostic network calls in either a synchronous (e.g., REST, gRPC) or asynchronous (via messaging) way. ... In a Microservice Architecture, especially with Database per Microservice, the Microservices need to exchange data. For resilient, highly scalable, and fault-tolerant systems, they should communicate asynchronously by exchanging Events. In such a case, you may want to have Atomic operations, e.g., update the Database and send the message. If you have SQL databases and want to have distributed transactions for a high volume of data, you cannot use two-phase locking (2PL) as it does not scale. If you use NoSQL Databases and want to have a distributed transaction, you cannot use 2PL either, as many NoSQL databases do not support two-phase locking. In such scenarios, use Event based Architecture with Event Sourcing. In traditional databases, the Business Entity with the current “state” is directly stored. In Event Sourcing, any state-changing event or other significant events are stored instead of the entities.
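A minimal sketch of the Event Sourcing idea, under the assumption of a simple order entity: state-changing events are appended to a log, and the current state is rebuilt by replaying them rather than being stored directly.

```python
# Minimal event-sourcing sketch. Names are illustrative; production systems
# add an event store, snapshots, and asynchronous publication of these events
# to the other microservices that need them.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    kind: str          # "OrderCreated", "ItemAdded", "OrderPaid", ...
    data: dict

@dataclass
class OrderState:
    items: List[str] = field(default_factory=list)
    paid: bool = False

def apply(state: OrderState, event: Event) -> OrderState:
    if event.kind == "ItemAdded":
        state.items.append(event.data["sku"])
    elif event.kind == "OrderPaid":
        state.paid = True
    return state

event_log: List[Event] = [
    Event("OrderCreated", {"order_id": "o-42"}),
    Event("ItemAdded", {"sku": "book-123"}),
    Event("OrderPaid", {"amount": 19.90}),
]

current = OrderState()
for e in event_log:                  # replaying the log yields current state
    current = apply(current, e)
print(current)                       # OrderState(items=['book-123'], paid=True)
```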


Researchers develop tool that automates device programming in the IoT

A new solution is now on the table, seeking to standardize the programming of event-driven architectures: the AsyncAPI specification. This specification allows users to define all the relevant information needed to design and run IoT devices in these environments. However, AsyncAPI is still in the early stages of development and therefore the tools that support it remain in short supply. Despite this, the researchers have developed a tool based on this new proposal that allows users to automate the creation of messages in the appropriate format, as well as the sending and receiving of these messages. Abel Gómez said: “Much of the work that goes into implementing a program for an IoT device involves creating messages in the format that subscribers to the channel expect and also “translating” messages from other devices in order to process the information. A large amount of code must therefore be programmed and, when done manually, this can be a source of errors.” The researcher continued: “By adopting this new tool, we can significantly shorten the amount of time needed to develop and launch programs, which favours interoperability, improves code quality and in turn limits the number of errors in the software development life cycle.”
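The kind of boilerplate the generated code is meant to replace looks roughly like the sketch below: serialize a message in the format subscribers to the channel expect, then publish it. It assumes an MQTT broker and the paho-mqtt client, and the topic and payload fields are hypothetical stand-ins for what an AsyncAPI document would define.

```python
# Hand-written message construction and publication for one IoT channel.
import json
import time
import paho.mqtt.client as mqtt

BROKER, TOPIC = "broker.example.com", "building/floor1/temperature"  # hypothetical

def make_reading(sensor_id: str, celsius: float) -> str:
    """Serialize a reading exactly as the channel's schema describes it."""
    return json.dumps({
        "sensorId": sensor_id,
        "temperature": celsius,
        "timestamp": int(time.time()),
    })

client = mqtt.Client()
client.connect(BROKER, 1883)
client.publish(TOPIC, make_reading("t-007", 21.5), qos=1)
client.disconnect()
```

Multiply this by every channel and every device type, and the appeal of generating it from a single AsyncAPI description becomes clear.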


Covid-19 sparks boom in Middle East digital payments sector

“The biggest opportunities lie in getting everyone in the region into the financial system,” said Dannish, adding that the advent of open banking will allow application programming interfaces (APIs) to transform regional payment systems. “APIs enable innovation and allow businesses to perform in a better way,” he said. “There is so much growth still to be had in this market. The barriers to entry have been lowered since the pandemic.” However, Yusuf noted that fragmentation remains a major barrier to growth for the digital payments sector. “The region is split in terms of payment methods, policy and regulation, infrastructure and consumer preference,” he said. “There is also a generally fragmented landscape in terms of payments partners. Merchants often have to operate payments strategy at a granular level.” Yusuf added that cash as payment remains stubbornly popular in the Middle East. “Despite the momentum away from cash, if we look at the region compared to the rest of the world, it is still cash-centric and digital payments are only on the way to fulfilling their potential,” he said. “While Covid-19 and deep digital penetration are fostering change, our report showed that cash has deep-rooted social and cultural significance in the region, and that won’t be transformed overnight.”


Building stronger multicloud security: 3 key elements

The typical enterprise IT infrastructure and solutions stack today includes not only public and private cloud deployments but also an average of 288 different SaaS offerings, according to the 2020 SaaS Trends report from tech vendor Blissfully. (That’s in addition to legacy technologies in many cases, too.) These various elements have different security requirements as well as different levels and types of built-in security capabilities. Different cloud providers have different tools, they often use different terms for the same class of tools, and they have differing positions on their security responsibilities. All this leaves CISOs having to stitch together a cohesive whole that documents whether the cloud-provided security features are adequate, whether more security is needed, and where and what additional security measures are warranted. “Cloud was supposed to make our lives simpler, and it a lot of ways it does; it provides a lot of benefits. But from a security perspective it adds a lot of complexity because there’s so much to do,” says Garrett Bekker... Respondents to the 2020 Cloud Threat Report survey from Oracle and KPMG cited complexity as a significant challenge, with 70% of respondents saying that too many specialized tools are required to secure their public cloud footprints and 78% highlighting the need for varying security policies and procedures between their cloud-resident and on-premises applications.


6 container concepts you need to understand

This architecture provides agility that is not feasible with VMs. Furthermore, containers support a more flexible model when it comes to compute and memory resources, and they allow resource-burst modes so that applications can consume more resources, when required, within the defined boundaries. In other words, containers provide scalability and flexibility that you cannot get from running an application on top of a VM. Containers make it easy to share and deploy applications on public or private clouds. More importantly, they provide consistency that helps operations and development teams reduce the complexity that comes with multi-platform deployment. Containers also enable a common set of building blocks that can be reused in any stage of development to recreate identical environments for development, testing, staging, and production, extending the concept of "write-once, deploy anywhere." Compared to virtualization, containers make it simpler to achieve flexibility, consistency, and the ability to deploy applications faster—the main principles of DevOps. Docker has become synonymous with containers. Docker revolutionized and popularized containers, even though the technology existed before Docker.


Concerns Run High as More Details of SolarWinds Hack Emerge

The security vendor has described UNC2452 as a threat actor that it has not encountered previously. FireEye has released indicators of compromise (IoCs) and signatures so organizations can detect the threat. But so far it has not publicly, at least, attributed the attack to any specific nation-state sponsor. Numerous media reports, however, have pinned the campaign on APT29, or Cozy Bear, a group thought to be associated with Russia's intelligence apparatus. Paul Prudhomme, cyber-threat intelligence analyst at IntSights, says his firm has so far not been able to corroborate or independently verify the claimed attribution to state-sponsored Russian cyber-espionage groups. "But we do nonetheless find the claim credible and worthy of further consideration," he says. The campaign is consistent with what IntSights has observed with state-sponsored Russian actors, including the targeting of the US government, the tight operational security, and the generally high level of sophistication and tradecraft involved. At the same time, "technology supply chain compromises of this kind are more typical of Chinese cyber-espionage groups than their Russian counterparts," Prudhomme says.


The state of AI in 2020: Biology and healthcare's AI moment

There's a ton of topics we could pick to dissect from Benaich and Hogarth's work, such as the use of PyTorch overtaking TensorFlow in research, the boom in federated learning, the analysis on talent and retainment per geography, progress (or lack thereof) in autonomous vehicles, AI chips, and AutoML. We encourage readers to dive into the report to learn more. But we wrap up with something different. Hogarth mentioned that the speculation phase in AI for biology and healthcare is starting, with lots of capital flowing. There are going to be some really amazing companies that come out of it, and we will start to see a real deployment phase kick in. But it's equally certain, he went on to add, there are going to be instances that will be revealed to be total frauds. So, what about AI ethics? Benaich and Hogarth cite work by pioneers in the field, touching upon issues such as commercial gender classification, unregulated police facial recognition, the ethics of algorithms, and regulating robots. For the most part, the report focuses on facial recognition. Facial recognition is widespread the world over and has led to controversy, as well as wrongful arrests. More thoughtful approaches seem to gather steam, Benaich and Hogarth note.


Security automation: Time for a new playbook

Start by examining the processes and procedures your organization’s security team already has in place and identify the tasks that consume the majority of team members’ time. These will be the key use cases where SOAR can provide the most benefit by applying efficiency, speed and consistency. For example, in many organizations this might include processes such as looking up asset information or reviewing additional data points related to a security alert or a reported phishing email. It could be the process of pulling data on what’s running in memory on a device and adding that detail to an existing incident management ticket to assist in an investigative decision. Or it could be isolating hosts or blocking an IP range on the network in order to stop a threat from spreading. These are all common use cases that can be effectively automated, but only if the underlying processes and procedures are mature and well-defined. Different categories of automation require different levels of maturity in the underlying processes. If you plan to introduce any type of automated response – such as automated threat containment – you must be absolutely certain that the underlying processes are mature, or it could have a greater-than-intended impact on the availability of systems and people.
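One of the enrichment use cases above, sketched as a simple playbook step: look up asset details for the host named in an alert and attach them to the existing ticket. Both REST endpoints are hypothetical placeholders; a SOAR platform would supply vendor-specific connectors for the same actions.

```python
# Hypothetical alert-enrichment playbook step.
import requests

CMDB_URL = "https://cmdb.example.com/api/assets"          # hypothetical
TICKETS_URL = "https://itsm.example.com/api/tickets"      # hypothetical

def enrich_alert(ticket_id: str, hostname: str) -> None:
    # 1. Pull asset context (owner, criticality, patch level) for the host
    asset = requests.get(f"{CMDB_URL}/{hostname}", timeout=10).json()

    # 2. Attach the context to the open ticket so the analyst starts with it
    note = (f"Asset owner: {asset.get('owner')}\n"
            f"Criticality: {asset.get('criticality')}\n"
            f"Last patched: {asset.get('last_patched')}")
    requests.post(f"{TICKETS_URL}/{ticket_id}/notes",
                  json={"body": note}, timeout=10)
```

The manual procedure has to be mature and well-defined first; only then is it worth wiring into a playbook like this.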



Quote for the day:

"People with good intentions make promises, but people with good character keep them." -- Joubert Botha

Daily Tech Digest - December 15, 2020

Blockchain Vs Relational Database: What’s The Difference?

So, what is blockchain technology? Well, it’s a ledger system that is decentralized and distributed. More so, it also offers data integrity, transparency, and so on. In simple terms, data on a blockchain is connected in a chain-like format: any data in the ledger takes on a chain-like structure. So, just imagine a structure of blocks that are linked together. Furthermore, each block is linked to the blocks before and after it. As a result, all the blocks create a chain of blocks, hence the name. Every single block on the ledger holds data or information about a transaction. So, what about the security of that transactional data? Well, every single block is cryptographically encrypted. Another cool thing about blockchain is that each block has a cryptographic hash ID that no one can reverse engineer. You might think of blockchain as a database that just stores information. However, the difference is immense. In reality, the two are quite different, and we’ll get into that shortly in the blockchain vs relational database comparison. Blockchain is, by default, immutable. It means that no one can modify any form of data whatsoever. Thus, any information that gets into the system once can never be altered or deleted. As a result, it will stay in the ledger forever.
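A toy sketch of that chain-like structure, assuming nothing more than Python's standard library: each block stores the hash of the previous block, so tampering with any earlier record invalidates every hash that follows it. Real blockchains add consensus, signatures, and replication across many nodes.

```python
# Toy illustration of hash-linked blocks.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list = []
add_block(ledger, {"from": "alice", "to": "bob", "amount": 5})
add_block(ledger, {"from": "bob", "to": "carol", "amount": 2})
print(is_valid(ledger))                 # True
ledger[0]["data"]["amount"] = 500       # tamper with an old record
print(is_valid(ledger))                 # False: the chain exposes the change
```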


6 Cloud Native Do’s and Don’ts for Developers

It’s easy to get so caught up in the question of what technologies you’re using, that you forget why you’re using them in the first place. But remember that adopting cloud infrastructure — whether it’s a Kubernetes cluster in your own data center, or serverless API in the public cloud — isn’t the goal. The goal is to help your organization build more scalable and flexible applications and to do it quicker. If you’re not actually taking into account the advantages and disadvantages of cloud infrastructure when you build applications, there’s a good chance you’re not actually meeting your organization’s real goals. ... Nodes crash. Networks fail. Remote APIs give unexpected results. Cloud native development requires you to handle these problems gracefully. Applications need to give users some sort of response, even if a component, or several components, are broken or non-responsive. You also need to think about how to recover once the broken or unavailable component is working again. Check out the Reactive Principles for additional guidance and techniques for getting started. ... Cloud native applications have unique compliance and security challenges.


Security considerations for OTA software updates for IOT gateway devices

Security is a process and a mindset. There is no magic switch we can toggle to make a system secure. It is important to stay vigilant, reviewing existing security flaws and adapting your workflow to account for them. New classes of attacks appear seemingly every day, and engineering teams must prepare for this in order to remain secure. The white hats have to get it right every time while the black hats only need to get it right once. You need to identify which resources are worth protecting. A database of weather readings is unlikely to contain proprietary information, whereas a customer database most certainly does. You will want to tailor the security to match the severity of a breach. The objective of most security measures is to increase the cost of an attack or reduce the value of any successful breach. It is important to realize that the OTA update system is generally only concerned with potential attacks on, and vulnerabilities in, the update process itself. It does not provide any protection against attacks that happen outside of the update process. For those kinds of attacks, you need to rely on other components provided by your operating system. One extremely important general security consideration is the principle of least privilege.
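As one illustration of raising the cost of attacking the update path (an assumption for this sketch, not a step taken from the article), a gateway can refuse to install any image whose detached signature does not verify against a pinned vendor key. The example below uses Python's `cryptography` package and generates a throwaway key pair so it runs end to end; in practice the private key stays on the build server and the public key is pinned in the device's read-only storage.

```python
# Sketch: reject firmware images whose Ed25519 signature does not verify.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side (build server): sign the firmware image.
vendor_key = Ed25519PrivateKey.generate()
firmware = b"...firmware image bytes..."
signature = vendor_key.sign(firmware)

# Device side (gateway): only the matching public key is present.
public_key = vendor_key.public_key()

def verify_update(image: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False

print(verify_update(firmware, signature))                 # True: safe to install
print(verify_update(firmware + b"tampered", signature))   # False: reject install
```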


Microsoft and the State of Quantum: Q&A With Mariia Mykhailova

The existing quantum hardware is just not mature enough to run quantum algorithms to solve real-world problems, both in terms of the number of qubits in the devices and their quality. However, quantum computing can have impact today – it just requires some extra creativity! We call these solutions “quantum-inspired algorithms” – algorithms that were developed with quantum processes in mind but run on classical hardware. ... Microsoft Quantum’s mission is to develop a scalable and open quantum system and ecosystem around it. This means that we’re working on building a full stack quantum system, and that stack has a lot of layers. Some of these get a lot of publicity, such as Microsoft Quantum Development Kit or the quantum hardware and the fundamental physics research required to implement our vision for it, the topological qubits. But there are other, less known but not less important layers of the stack between these two, such as qubit control technology that has to support scaling quantum systems to millions of qubits, way beyond the physical limitations of current systems. That being said, solving world’s intractable problems is certainly not a single-company effort!


An Introduction to Blockchain + NoSQL Databases

Despite the benefits, distributed computing is not pervasive; even within modern enterprises, centralization of many systems is still quite common. This includes industries you would expect to be designed with more resiliency in mind, like global financial systems or supply chain management, which have tended to be centralized around mainframe computing. By the way, you can always tell when there is a centralized system, because when it fails, it fails absolutely! When all data or services are running on a single machine, it is quite easy to know when it goes down because everything completely stops. That may be because it takes time to start up a replacement machine, time to notice a failure before re-routing users, or any of a myriad of other devastating engineering reasons. A centralized system is the opposite of the peer-to-peer networks we aspire to. However, with the introduction of platforms like Bitcoin, the next generation of digital currencies and “ledgers” is slowly being proven out. Now there are thousands of different cryptocurrencies and dozens of blockchain backends taking advantage of decentralized technology. As an aside, note that “distributed ledger” does not equate to the proof-of-work scenarios that many cryptocurrencies use.
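
For readers unfamiliar with the proof-of-work scenarios mentioned above, the core mechanic is just a brute-force hash puzzle: keep trying nonces until the block's hash meets a difficulty target. A toy sketch, with invented block contents and difficulty; real chains layer consensus rules, rewards, and networking on top of this:

    import hashlib

    def proof_of_work(block_data, difficulty=4):
        # Find a nonce whose SHA-256 digest of block_data + nonce starts with
        # `difficulty` zero hex digits; the nonce is the proof that work was done.
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce, digest
            nonce += 1

    nonce, digest = proof_of_work("block#42|prev=000a...|txs=[...]")
    print(nonce, digest)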


Ethical design thinking: empowering designers to drive ethical change

Designers have started to recognise that some of what they have created is harming people. They are now starting to look at the use of technology and its impact in the long term, with ethical design at the centre of their thinking. Despite their motivation, companies have accepted that AI bias exists and are changing how they harvest and use people’s data — and designers are central to this change in strategy. “The core is really around pivoting from what can be done, with the designer coming in at a later stage, to thinking about what should be done, with the designer coming in at the beginning of the process,” says Woodley. “The designer represents the human. They create what is consumed by the person and so they should be the ones influencing the line between what the business wants, what is possible from a technology perspective and what is responsible from an ethical perspective,” she continues. Design thinking, starting with empathy or the understanding of the human, needs to be at the forefront of future technology innovations and services. We need to flip the current model. Instead of leveraging technology to achieve business goals without taking the human impact into consideration, we need to put the human at the centre of our technology endeavours.


What’s at stake in the Computer Fraud and Abuse Act (CFAA)

Intended as the United States’ first anti-hacking law, the CFAA was enacted almost thirty-five years ago, long before lawyers and technologists had any sense of how the Internet would proliferate and evolve. In fact, the Act is outdated enough that it specifically excludes typewriters and portable handheld calculators from its definition of a computer. Since its inception, it has been applied broadly, from basic terms-of-service breaches, like the infamous case of Aaron Swartz downloading articles from the digital library JSTOR, to indicting nation-state hackers and extraditing Julian Assange. The core of the problem lies in the vague, perhaps even draconian, description of “unauthorized” computer use. While the law has been amended several times, including to clarify the definition of a protected computer, the ambiguity of unauthorized access puts the average consumer at risk of breaking federal law. According to the Ninth Circuit, you could potentially be committing a felony by sharing subscription passwords. The stakes are particularly high for security researchers who identify vulnerabilities for companies without safe harbor or bug bounty programs. White-hat hackers, who act in good faith to report vulnerabilities to a company before it is breached, face the same legal risks as cybercriminals who actively exploit and profit from those vulnerabilities.


Take any open source project — its contributors cut across national, religious and racial lines

“Open source is not all technical, and there is a strong community angle to this. During my college days, I was involved in small ways with local user groups, where I used to conduct classes and tutorials on various topics. Once I moved to Bengaluru to work, I got heavily involved in the Python community and organised the first Python conference in India in 2009. PyCon India was probably one of the first language-specific tech conferences in India, and it has since grown to be one of the largest PyCons in the world. This year, due to the coronavirus situation, we’re conducting the conference online. I’m also an active contributor to the Stack Overflow website, where I rank among the top 0.29 per cent worldwide for answering questions.” Ibrahim feels that a lot of people don’t seem to realise that contributing something significant to a project requires a large amount of work. The main challenge is developing the patience and dedication to spend enough time understanding a project so that one can contribute to it. There are smaller problems, like some projects not having enough contributors to help with technical questions, but overall the main problem is the lack of discipline to put in the time necessary to achieve some level of proficiency.


Hear the Music Behind Your Data

When faced with the troves of data piling up daily, companies can quickly become overwhelmed. They’re unsure where to begin an analysis of the connections between data points. Data science is about exploring and seeking patterns within data, so it plays a pivotal role in getting companies started with their analyses. Oftentimes, data scientists won’t even know the question before they explore; instead, they’ll use their technology to identify emerging trends and patterns. Capturing and interpreting those patterns can provide tremendous benefits to a company. For example, data can help you catch bots that sign up and then spam your product. Human interaction with a product produces certain patterns — behavior forms a shape. You can compare that behavior shape to potentially anomalous datasets and determine whether a user is human or not. That gives your team confidence in disconnecting potential bots from your system, which can save a fair amount of server space and money. Music is all about patterns, too. Composing a musical piece requires understanding how notes and the spaces between them all fit together to create cohesive patterns. Every song you’ve ever heard has a particular waveform derived from unique patterns of notes and spaces.
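
One cheap way to turn "behavior forms a shape" into code is to compare the spread of a user's inter-event timings against what humans typically produce; bots tend to act at suspiciously regular intervals. A hedged sketch, where the threshold and minimum sample size are invented for illustration and a real system would combine many such signals:

    import statistics

    def looks_like_bot(event_timestamps, cv_threshold=0.1, min_events=6):
        # event_timestamps: sorted times (in seconds) at which the account acted.
        gaps = [b - a for a, b in zip(event_timestamps, event_timestamps[1:])]
        if len(gaps) < min_events - 1:
            return False                       # too little signal to judge
        mean_gap = statistics.mean(gaps)
        if mean_gap == 0:
            return True                        # many simultaneous events: suspicious
        # Coefficient of variation: how irregular the gaps are relative to their size.
        cv = statistics.pstdev(gaps) / mean_gap
        return cv < cv_threshold               # far more regular than humans manage

    print(looks_like_bot([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]))  # metronomic -> True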


The Private Sector Needs a Cybersecurity Transformation

Fundamentally, the current approach to security is focused on the past — even if it's just a few milliseconds ago. Identifying a threat that has already occurred and stopping the next one is not protection. And with the advances in technology available today, it should not be the accepted protocol for our industry. When a time-consuming analysis results in the conclusion of "we can block this attack next time," you are nowhere close to secure. Simply put, this approach does nothing to account for the agile adversaries that we know exist. Staying agile in this fight means looking forward, not back. For that to be a reality, however, time plays a crucial role. Research from the Ponemon Institute shows that security teams spend at least 25% of their time chasing false positives. I'd argue it's even higher. Defense cannot continue to be about uncovering the threats that have already happened while trying to block them again. Time has to be spent on truly preventing what's coming next. ... While hygiene is important, there is very little prevention going on at the threat level. Well-meaning employees have been stretched so thin that they find post-event response acceptable and equate it to cybersecurity. Sometimes hygiene equates to patching, but often there is a good reason why you can't patch.



Quote for the day:

“The real voyage of discovery consists not in seeking new landscapes but in having new eyes.” -- Marcel Proust