Daily Tech Digest - July 31, 2020

5 Must-Have Skills For Remote Work

When teams work remotely, at least half of all communication is done via writing rather than speaking. This means communicating through emails, Slack, or texting. It even applies to using the chat function while you’re on a video call. You need to be able to communicate clearly no matter what platform you’re using. ... Working remotely doesn’t mean working alone. You’re still going to be part of a team, which means working with colleagues on projects and tasks. Without a physical space to gather, collaboration can be a bit more challenging. Communication skills and collaboration skills go hand in hand, as communication plays a huge role in successful collaboration. Find the right balance of video meetings, phone calls, and messages to ensure ample but not overwhelming communication. ... You might be working with colleagues who are in a different time zone, which impacts deadlines, when meetings can be scheduled, and even when you can get in touch with those colleagues. If you’re assigned to work with a new team, you might have to adapt to the way that team works.


How to secure your project with one of the world’s top open source tools

Dynamic application security testing (DAST) is a highly effective way to find certain types of vulnerabilities, like cross-site scripting (XSS) and SQL injection (SQLi). However, many commercial DAST tools are expensive and are often used only when a project is getting ready to ship, if they are used at all. ZAP can be integrated into a project’s CI/CD pipeline from the start, ensuring that many common vulnerabilities are detected and can be fixed very early in the project lifecycle. Testing in development also means that you can avoid the need to handle tools and features designed to make automation difficult, like single sign-on (SSO) and web application firewalls (WAFs). ... For web applications, or any projects that provide a web-based interface, you can use ZAP or another DAST tool. But don’t forget to use static application security testing (SAST) tools as well. These are particularly useful if they are introduced when starting a project. If SAST tools are used against more mature projects, they often flag a large number of potential issues, which makes it difficult to focus on the most critical ones.
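As an illustration of what integrating ZAP into a CI/CD pipeline can look like, here is a minimal sketch using ZAP's official Python API client. It assumes a ZAP daemon is already running locally on port 8080 with a known API key; the staging URL is a placeholder:

```python
# Minimal CI step: spider and actively scan a target through a running
# ZAP daemon, then fail the build if high-risk alerts are found.
# Assumes `pip install python-owasp-zap-v2.4` and ZAP listening on :8080.
import sys
import time

from zapv2 import ZAPv2

TARGET = "http://staging.example.com"  # placeholder target URL
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://localhost:8080",
                     "https": "http://localhost:8080"})

zap.urlopen(TARGET)                    # register the site with ZAP
spider_id = zap.spider.scan(TARGET)    # crawl to discover URLs
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(TARGET)       # active scan for XSS, SQLi, etc.
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

high_risk = [a for a in zap.core.alerts(baseurl=TARGET)
             if a["risk"] == "High"]
if high_risk:
    print(f"{len(high_risk)} high-risk alerts found")
    sys.exit(1)                        # non-zero exit fails the pipeline
```

In practice many teams reach for ZAP's packaged baseline scan (the zap-baseline.py script shipped in the official Docker images) rather than a hand-rolled script; either way, the point is the same: the scan runs on every build, not just before shipping.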


Using the Attack Cycle to Up Your Security Game

Attack sophistication is directly proportional to the goals of the attackers and the defensive posture of the target. A ransomware ring will target the least well-defended and the most likely to pay (ironically, cyber insurance can create a perverse incentive in some situations), because there is an opportunity cost and return-on-investment calculation for every attack. A nation-state actor seeking breakthrough biotech intellectual property will be patient and well-capitalized, developing new zero-day exploits as it launches a concerted effort to penetrate a network's secrets. One of the most famous of these attacks, Stuxnet, exploited vulnerabilities in SCADA systems to cripple Iran's nuclear program. The attack was thought to have penetrated the air-gapped network via infected USB thumb drives. As awareness of these complex, multi-stage attacks has risen, startups have increased innovation, such as in the behavior analytics space, where complex machine-learning algorithms determine "normal" behaviors and look for that one bad actor. Threat actors are the individuals and organizations engaged in the actual attack. In the broadest sense of the term, they are not always malicious.


The FI and fintech opportunity with open banking

What’s different now is that over the last two or three years the industry has come together to collaborate on evolving the ecosystem. One example is the formation of an industry group called the Financial Data Exchange. As a result, financial institutions, financial data aggregators, and related parties are developing standards for access, authentication, and transparency that will provide end-to-end governance to keep the ecosystem safe and fair, and consumer data secure. ... “Banks are looking for technology innovation to address both back office challenges, get faster and leaner, reduce costs, but also to increase engagement with their customers,” Costello says. “Certainly at times like this we see how important digital engagement is.” As some FIs close branches to reduce costs, digital engagement becomes essential. If it’s done right, it works, and the opportunity for innovation abounds. The better multi-factor authentication and authorization that comes with open banking means the bank has a higher degree of confidence that the person with whom it’s engaging is the account holder. With that higher degree of trust, it can offer a higher degree of engagement.


Reduced cost, responsive apps from micro front-end architecture

Early micro front-end projects focused on providing better separation of logic and UI elements into smaller, more dynamic components. But modern micro front ends have moved far beyond the idea of loosely coupled code to full-scale Kubernetes-based deployment. There's even been a recent trend of micro front ends containerized as microservices and delivered directly to the client. For example, the H2 app by Glofox recently adopted this approach to implement a PaaS for health and fitness apps, which gyms and health clubs then customize and provide to clients. The app uses the edgeSDK from Mimik Technology Inc. to manage the containerized micro front-end microservices deployment to run natively across iOS, Android and Windows devices. In addition, a micro front-end deployment reduces the server load. It consumes only client-side resources, which improves response times in apps vulnerable to latency issues. Users once had to connect to databases or remote servers for most functions, but a micro front end greatly reduces that dependency.


8 Tips for Crafting Ransomware Defenses and Responses

For any attack that involves ransomware, the fallout can be much more extensive than simply dealing with the malware. And organizations that don't quickly see the big picture will struggle to recover as quickly and cost-effectively as they might otherwise be able to do (see: Ransomware + Exfiltration + Leaks = Data Breach). That's why understanding not just what ransomware attackers did inside a network, but what they might still be capable of doing - inside the network, as well as by leaking - is an essential part of any incident response plan, security experts say. So too is identifying how intruders got in - or might still get in - and ensuring those weaknesses cannot be exploited again, says Alan Brill, senior managing director in Kroll's cyber risk practice. "If you don't lock it down, it's very simple: You're still vulnerable," he tells Information Security Media Group. "If you lock down what you thought was the issue but you were wrong - it wasn't the issue - that they weren't just putting ransomware in your system but they've been in there for a month examining your system, exfiltrating data and lining up how to do the most damage when they launched the ransomware, you may not even know what happened."


We've forgotten the most important thing about AI. It's time to remember it again

Leufer has just put the final touches to a new project to debunk common AI myths, which he has been working on since he received his Mozilla fellowship – an award designed for web activists and technology policy experts. And one of the most pervasive of those myths is that AI systems can and do act of their own accord, without supervision from humans. It certainly doesn't help that artificial intelligence is often associated with humanoid robots, suggesting that the technology can match human brains. An AI system deployed, say, to automate insurance claims is very unlikely to come in the form of a human-looking robot, and yet that is often how the technology is portrayed, regardless of its application. Leufer calls those "inappropriate robots", often shown carrying out human tasks that would never be necessary for an automaton. The most common offenders include robots typing on keyboards and robots wearing headphones or using laptops. The powers we ascribe to AI as a result even have legal ramifications: there is an ongoing debate about whether an AI system should own intellectual property, or whether automatons should be granted citizenship.


Scaling Distributed Teams by Drawing Parallels from Distributed Systems

The biggest bottleneck for any distributed team is decision-making. Similar to distributed systems, if we apply “deliver accountability and receive autonomy,” the bottleneck is eventually removed. For this to happen, there should be a lot of transparency and information sharing, so that teams and individuals are enabled to make decisions independently. Clarity is harder with a distributed team. Distributed systems send heartbeats very frequently and detailed reports at a lower frequency. Communication is the key. Distributed standups are a better way of determining progress. Apart from that, move one-to-one conversations and decision-making to a common channel. We tried a concept called the end-of-day update: everyone posts their progress at the end of their day (accounting for different time zones). We believe it gives a better view of what each person is working on and the overall progress, even before they come to standups. At EverestEngineering, the coaches are responsible for improving the health of the channel. A healthy distributed team has a lot of discussions on Slack channels and quick calls. You can see a lot of decisions made in the channel. There are enough reactions and threads for a question.


How to build a quantum workforce

The growth means that companies are looking to hire applicants for quantum computing jobs and that the country needs to build a quantum workforce. Efforts are underway; earlier this month, more than 5,000 students around the world applied to IBM's Qiskit Global Summer School for future quantum software developers. And the National Science Foundation and the White House Office of Science and Technology Policy held a workshop in March designed to identify essential concepts to help students engage with quantum information science (QIS). But industry experts speaking on the topic during an IBM virtual roundtable Wednesday said K-12 students are not being prepared to go on to schools with the requisite curriculum to work in this industry. Academia and industry must work in tandem to engage the broadest number of students and get them prepared for the kinds of jobs that will be needed in the future, said Jeffrey Hammond, vice president and principal analyst at Forrester Research, who moderated the discussion. It was only four years ago that quantum computing became available in the cloud, giving more people access, noted panelist Abe Asfaw, global lead of quantum education at IBM Quantum.


A Developer-Centric Approach to Modern Edge Data Management

A substantial majority of embedded developers in the IoT and complex instrumentation space use C, C++, or C# to handle data processing and local analytics. That’s in part because of how easy it is to handle direct I/O for devices and internal system components, as well as more complex digitally-enhanced machinery, through some variation of inp() and outp() statements. It’s also easy to manipulate collected data using familiar file system statements such as fopen(), fclose(), fread(), and fwrite(). This is the path of least resistance. Almost anyone who takes a programming class (or just takes the time to learn how) can use these statements to interact with data at the file system level. The problem is that file systems are very simple. They don’t do much by themselves. When it comes down to document and record management, indexing, sorting, creating and managing tables, and so on, there’s only one operative statement: DoItYourself(). And we’re not even talking about rare or rocket-science-level activities here. These are everyday activities that you’d find in any database system. Wait! It’s the D-word! May as well be the increment of the ASCII character pointer by two to the … you know what word.
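To make the "DoItYourself()" point concrete, here is a hedged sketch: a Python analogue of the C-style fread()/fwrite() pattern the article describes, with an invented fixed-width record layout. Even a simple lookup by ID turns into a hand-rolled linear scan, exactly the kind of bookkeeping a database engine would otherwise do:

```python
# DIY record management over a raw file: a Python analogue of the
# C-style fread()/fwrite() pattern. The record layout is invented
# for illustration: 4-byte id, 20-byte name, 8-byte float reading.
import struct

RECORD = struct.Struct("<i20sd")

def append_record(path, rec_id, name, reading):
    with open(path, "ab") as f:
        # struct pads/truncates the name to exactly 20 bytes
        f.write(RECORD.pack(rec_id, name.encode("utf-8"), reading))

def find_by_id(path, rec_id):
    # No index, no query planner: every lookup is a linear scan.
    with open(path, "rb") as f:
        while chunk := f.read(RECORD.size):
            rid, raw_name, reading = RECORD.unpack(chunk)
            if rid == rec_id:
                return rid, raw_name.rstrip(b"\0").decode("utf-8"), reading
    return None

append_record("sensors.dat", 7, "pump-3", 98.6)
print(find_by_id("sensors.dat", 7))   # (7, 'pump-3', 98.6)
```

Indexing, concurrent writers, schema changes, and crash recovery all remain DoItYourself() from here, which is the article's case for reaching for an embedded database instead.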



Quote for the day:

"If you have no confidence in self, you are twice defeated in the race of life." -- Marcus Garvey

Daily Tech Digest - July 30, 2020

The Challenges of Building a Reliable Real-Time Event-Driven Ecosystem

Building a dependable event-driven architecture is by no means an easy feat. There is an entire array of engineering challenges you will have to face and decisions you will have to make. Among them, protocol fragmentation and choosing the right subscription model (client-initiated or server-initiated) for your specific use case are some of the most pressing things you need to consider. While traditional REST APIs all use HTTP as the transport and protocol layer, the situation is much more complex when it comes to event-driven APIs. You can choose between multiple different protocols. Options include the simple webhook, the newer WebSub, popular open protocols such as WebSockets, MQTT or SSE, or even streaming protocols, such as Kafka. This diversity can be a double-edged sword—on one hand, you aren’t restricted to only one protocol; on the other hand, you need to select the best one for your use case, which adds an additional layer of engineering complexity. Besides choosing a protocol, you also have to think about subscription models: server-initiated (push-based) or client-initiated (pull-based). Note that some protocols can be used with both models, while some protocols only support one of the two subscription approaches. Of course, this brings even more engineering complexity to the table.
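To make the server-initiated (push) model concrete, here is a minimal sketch using MQTT, one of the open protocols mentioned above, via the Eclipse Paho client for Python. The broker address and topic are placeholders:

```python
# Minimal push-model subscriber: the broker pushes events to the
# client as they arrive; the client only declares its subscription.
# Assumes `pip install paho-mqtt`; broker/topic names are placeholders.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # (Re)subscribe on every connect so a dropped session recovers.
    client.subscribe("orders/created")

def on_message(client, userdata, msg):
    print(f"event on {msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883, keepalive=60)
client.loop_forever()   # block and let the broker push events
```

A client-initiated (pull) design would instead poll an HTTP endpoint on a schedule. The trade-off mirrors the one described above: push models hold connections open and deliver events promptly, while pull models are simpler but add polling latency and load.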


Successful Digital Transformation Requires a Dual-track Approach

This first part of the dual-track approach focuses on the identification and implementation of new digital tech throughout an organization, while also working to change cultures and business workflows impacted by the transformation, according to the report. While this step is critical, it is also complex and time-consuming. The benefits may take time to come to fruition, which is why many executives are dissatisfied with current transformation results. Not only are executives impatient, but they don't have the second part of the dual track to tide them over, the report found. The second portion is a parallel track that homes in on areas overlooked in large-scale transformation tactics. These areas include the organization's ability to quickly connect and modernize hundreds of crucial processes that cross both business workflows and work groups, according to the report. This goal can be achieved through rapid-cycle innovation, which encourages business professionals outside of IT to propose and create new apps for updating existing workflow processes, with the goal of achieving quick wins for the company and supporting long-term transformation, the report found.


How deploying new-age technologies has changed the role of leadership amid COVID-19

Circumstances created by a pandemic such as COVID-19 have been hugely disruptive and could even render organizations paralytic if they are far removed from any understanding of how technology is an imperative and not an optional add-on. This is why it is critical to have a proactive mindset toward technology, instead of a reactive approach. Proactive investment in technology is helping organizations reap maximum benefits, as this approach allows leaders to prepare their people to embrace and become comfortable in using technology, so that it becomes spontaneously embedded in an organization at a fundamental level. The investments we proactively made many years ago, whether in secure virtual platforms or AI-driven due diligence processes that help automate how we finalize our contracts, have helped us seamlessly adapt to working with minimum disruption. The biggest asset has been the spontaneous comfort level of our people in adapting to this transformed scenario of working from home, due to their prior high degree of familiarity with using technology platforms and processes at work over the past many years, ensuring our ability to optimize productivity.


Anatomy of a Breach: Criminal Data Brokers Hit Dave

At the moment, however, some evidence points to ShinyHunters having phished Dave employees. The group has previously advertised - and has been suspected of being behind - the sale of millions of stolen records obtained from Indonesian e-commerce firm Tokopedia, Indian online learning platform Unacademy, Chicago-based meal delivery outfit HomeChef, online printing and photo store ChatBooks, university news site Chronicle.com, as well as Microsoft's private GitHub repositories, according to Baltimore-based security firm ZeroFox. How does ShinyHunters steal so much data? Cyble says that in a post to a hacking forum, a user called "Sheep" says of the Dave breach: "This database was dumped through sending GitHub phishing emails to Dave.com employees. The employees were found by searching for developers in the organization on LinkedIn/Crunchbase/Angel. All of the databases sold by ShinyHunters were obtained through this method. In some cases, [the] same method was used but for GitLab, Slack and Bitbucket."


IoT Security: How to Search for Vulnerable Connected Devices

Researchers offer many tools and ways to search for hacker-friendly IoT devices. The most effective methods have already been tested by botnet creators. In general, the use of certain vulnerabilities by botnets is the most reliable criterion for assessing the level of security of IoT devices and the possibilities of their mass exploitation. When searching for vulnerabilities, some attackers rely on the firmware (in particular, on errors discovered during firmware analysis using reverse engineering methods). Other attackers start by searching for the manufacturer’s name. In any case, a successful search needs some kind of distinctive feature of a vulnerable device, and it would be nice to find several such features. ... There are really many vulnerabilities in IoT devices, but not all of them are easy to exploit. Some vulnerabilities require a physical connection, or being nearby or on the same local network. The use of others is complicated by quick security patches. On the other hand, manufacturers are in no hurry to patch firmware and often admit it. Getting an accurate list of vulnerable IoT devices requires significant effort; it is not just a one-time query.
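The article stops short of naming a search tool, but as a hypothetical illustration of hunting by a "distinctive feature", this is what a banner search looks like with the Shodan Python library. The API key and the banner string are placeholders chosen for illustration:

```python
# Hypothetical device search keyed on a "distinctive feature":
# here, an HTTP Server header typical of embedded web servers.
# Assumes `pip install shodan` and a valid API key (placeholder below).
import shodan

api = shodan.Shodan("YOUR_API_KEY")

results = api.search('Server: GoAhead-Webs')   # example banner string

print(f"{results['total']} matching devices indexed")
for match in results["matches"][:5]:
    print(match["ip_str"], match.get("port"), match.get("org"))
```

Defenders can run the same kind of query against their own address space to see what an attacker would find.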


Security: This nasty surprise could be waiting for retailers when they open up again

"A lot of retailers, when they come back online, they're going to be focused on business processes and getting employees back to work. They're not necessarily thinking, 'maybe I need to update Windows on my computer terminal', or update POS terminal firmware." In retail, where surges in online transactions during the pandemic have forced retailers to quickly transform their ecommerce capabilities, hackers have shifted their focus to make the most of this opportunity. This includes changing-up well-known types of attacks by using them in different ways, such as exploiting credit cards within a different type of merchant platform, and targeting parts of retailers' systems that might otherwise slip through the cracks. We've already seen new forms of attacks on retailers take place during the pandemic. In late June, researchers at security software firm Malwarebytes identified a new web-skimming attack , whereby cybercriminals concealed malware on ecommerce sites that would steal information typed into the payment input fields, including customers' names, address and card details.


Finland government funds work on potential quantum leap

The Finnish government has allocated €20.7m to the venture, which will be run as an innovation partnership open to international bidding. Closer to home, VTT-TRCF plans to cooperate with Finnish companies across the IT and industrial sphere during the various phases of the project’s implementation and application. The rapid advances in quantum technology and computing have the potential to provide societies with the tools to overcome major future problems and challenges, such as the Covid-19 pandemic, that remain out of the reach of contemporary supercomputers. Quantum technologies have the potential to complete complex calculations, which currently take days, orders of magnitude quicker. If they prove practical for calculations that traditional computers are fundamentally unable to do, they would mark a leap forward in computing capability far greater than that from the abacus to a modern computer. Antti Vasara, the CEO of VTT-TRCF, said: “The quantum computers of the future will be able to accurately model viruses and pharmaceuticals, or design new materials in a way that is impossible with traditional methods.”


What the CCPA means for content security

Simply installing an ECM system will not yield a secure content ecosystem. If there is one thing that all ECM experts agree on, it's that installing an ECM system by itself will accomplish nothing aside from consuming resources. People need to use the system to manage content -- and want to use it -- even after setting up the necessary security controls to meet the requirements of the CCPA. Deploying an ECM system that is so secure that people do not want to use it is a waste of resources. The ECM system does not need to be complicated. Setting up a secure desktop sync of content is an important first step in ease of use and adoption. Instead of just rolling it out, companies need to work with each group using the software first. The business must help users organize their content and set up a basic structure for storing content so that the system doesn't become disorganized. Depending on the system a business is using, setting up a basic structure may include a basic taxonomy, content types, standard metadata or a combination of any of these. If a business implements its ECM system correctly, its largest challenge will be securing mobile devices and laptops.


How blockchain could play a relevant role in putting Covid-19 behind us

Covid-19 has revealed the weaknesses of global supply chains, with countless reports of PPE issues, a lack of food in impoverished areas, and a breakdown of business-as-normal, even in places where demand has remained constant. Trust has always been the keystone of trade. But how can you trust supply chain partners to deliver in times of widespread failure? Owing to their decentralised nature, blockchain-based applications create a transparent ecosystem in which you trust — and see — that the mechanisms in place are fair to all. They can provide instant overviews of entire supply chains to highlight issues as soon as they arise. What’s more, it is possible to implement live failsafes with smart contracts that can ensure the smooth continuation of the supply chain and remove the very need for trust in the first place. To this end, the World Economic Forum developed the Blockchain Deployment Toolkit, a set of high-level guidelines to help companies implement best practices across blockchain projects – especially those helping solve supply chain issues. The forum worked with more than 100 organisations for more than a year, delving into 40 different blockchain use cases, including traceability and automation, to help guide organisations in their efforts to solve real-world problems with blockchain.


The growing trend of digitization in commercial banking

“Technology has absolutely been at the forefront of all the changes we have seen and will see in upcoming years,” explained Rao. Even so, the business of banking has not changed on a fundamental level. Rather, products have become more commoditized; similar business products are being offered, but customers are using them in different ways. In Rao’s words, “the ‘what’ component has not changed, but the ‘how’ has.” This is where digitization has had the biggest impact. For example, commercial banking capabilities like making a payment or collecting a receivable have long been available for corporate entities. But today, the same capability can be offered in a way that emphasizes a great user experience—something that hasn’t always been a focal area in the commercial banking space. ... Large traditional banks are frequently riddled with outdated legacy systems on the back end of operations, which dilutes their offerings even with modern digital technology at the front end. These legacy systems make it costly to create the ideal customer experience, leading many banks to focus on implementing strategies that pave the path towards modernization. In certain cases, this means opening up and modernizing selective pieces of back-end systems to improve operations overall.



Quote for the day:

"Leadership has a harder job to do than just choose sides. It must bring sides together." --

Daily Tech Digest - July 29, 2020

When ‘quick wins’ in data science add up to a long fail

The nature of the quick win is that it does not require any significant overhaul of business processes. That’s what makes it quick. But a consequence of this is that the quick win will not result in a different way of doing business. People will be doing the same things they’ve always done, but perhaps a little better. For example, suppose Bob has been operating a successful chain of lemonade stands. Bob opens a stand, sells some lemonade, and eventually picks the next location to open. Now suppose that Bob hires a data scientist named Alice. For their quick win project, Alice decides to use data science models to identify the best locations for opening lemonade stands. Alice does a great job, Bob uses her results to choose new locations, and the business sees a healthy boost in profit. What could possibly be the problem? Notice that nothing in the day-to-day operations of the lemonade stands has changed as a result of Alice’s work. Although she’s demonstrated some of the value of data science, an employee of the lemonade stand business wouldn’t necessarily notice any changes. It’s not as if she’s optimized their supply chain, or modified how they interact with customers, or customized the lemonade recipe for specific neighborhoods.


How New Hardware Can Drastically Reduce the Power Consumption of Artificial Intelligence

Currently, AI calculations are mainly performed on graphics processors (GPUs). These processors are not specially designed for this kind of calculation, but their architecture turned out to be well suited for it. Due to the wide availability of GPUs, neural networks took off. In recent years, processors have also been developed specifically to accelerate AI calculations (such as Google’s Tensor Processing Units – TPUs). These processors can perform more calculations per second than GPUs, while consuming the same amount of energy. Other systems, on the other hand, use FPGAs, which consume less energy but also calculate much less quickly. If you compare the ratio between calculation speed and energy consumption, the ASIC, a competitor of the FPGA, scores best. Figure 1 compares the speed and energy consumption of different components. The ratio between the two, the energy performance, is expressed in TOPS/W (tera operations per second per watt, or the number of trillion calculations you can perform per joule of energy). However, in order to drastically increase energy efficiency from 1 TOPS/W to 10,000 TOPS/W, completely new technology is needed.
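A quick unit check makes the scale of that jump concrete: since one watt is one joule per second, TOPS/W reduces to tera-operations per joule, so the proposed improvement is four orders of magnitude less energy per operation. A few lines of Python as a sketch:

```python
# Unit check: TOPS/W == tera-operations per joule,
# because (ops/second) / (joules/second) = ops/joule.
for tops_per_watt in (1, 10_000):
    ops_per_joule = tops_per_watt * 1e12
    joules_per_op = 1.0 / ops_per_joule
    print(f"{tops_per_watt:>6} TOPS/W -> {joules_per_op:.0e} J/op")

# Output:
#      1 TOPS/W -> 1e-12 J/op   (one picojoule per operation)
#  10000 TOPS/W -> 1e-16 J/op   (a tenth of a femtojoule)
```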


Maintaining Business Continuity With Proper IT Infrastructure And Security Tools A Challenge For IT Pros

Business continuity plans are integral to companies' ability to withstand an unanticipated crisis. Although 86% of companies had a business continuity plan in place prior to COVID-19, 12% of respondents have minimal or no confidence at all in their organization's plan to withstand an unanticipated crisis, and only 35% of respondents feel very confident in their plan, according to the LogicMonitor study. IT decision makers also expressed overall reservations about their IT infrastructure's resilience in the face of a crisis. Globally, only 36% of IT decision makers feel that their infrastructure is very prepared to withstand a crisis. And while a majority of respondents (53%) are at least somewhat prepared to take on an unexpected IT emergency, 11% feel they are minimally prepared or believe their infrastructure will collapse under pressure. 84% of global IT leaders are responsible for ensuring their customers' digital experience, but nearly two-thirds (61%) do not have high confidence in their ability to do so, according to LogicMonitor's study. The study further revealed that more than half (54%) of IT leaders experienced initial IT disruptions or outages with their existing software, productivity, or collaboration tools as a result of shifting to remote work in the first half of 2020.


Why blockchain-powered companies should target a niche audience

The main problem lies in the fact that blockchain technology has a vast array of potential applications. So it's all too easy to have an overly broad value proposition. But this creates a lack of clarity and precision, which can, in turn, drive customers away. A generic ‘go-to-market’ strategy for a tech company often fails to take into account how the adoption of new technology works. This is especially the case with blockchain-powered companies because they usually involve complex partnerships with financial institutions. When marketing a technology company, the focus on technology potential often distracts from a solid Minimum Viable Product (MVP), and so results in a generic go-to-market strategy. It's a case of what I call a ‘CTO-led startup’, a scenario where the founder may have incredibly deep technological skill and knowledge, but forgets that they need to wear two hats: that of a tech builder, and that of a visionary CEO. Because blockchain is an ‘enabling technology’, the CEO of a blockchain-dependent business can potentially have a near-unlimited vision for the company. So it can feel counterintuitive to zero in on a singular laser focus when building the brand, because it superficially seems like a reductive strategy when compared to the broader vision.


Why Data Science Isn't an Exact Science

In fact, there are several reasons why data science isn't an exact science, some of which are described below. "When we're doing data science effectively, we're using statistics to model the real world, and it's not clear that the statistical models we develop accurately describe what's going on in the real world," said Ben Moseley, associate professor of operations research at Carnegie Mellon University's Tepper School of Business. "We might define some probability distribution, but it isn't even clear the world acts according to some probability distribution." ... If you lack some of the data you need, then the results will be inaccurate because the data doesn't accurately represent what you're trying to measure. You may be able to get the data from an external source but bear in mind that third-party data may also suffer from quality problems. A current example is COVID-19 data, which is recorded and reported differently by different sources. "If you don't give me good data, it doesn't matter how much of that data you give me. I'm never going to extract what you want out of it," said Moseley.


Artificial Intelligence Loses Some Of Its Edginess, But Is Poised To Take Off

“It appears that AI’s early adopter phase is ending; the market is now moving into the ‘early majority’ chapter of this maturing set of technologies,” write Beena Ammanath, David Jarvis and Susanne Hupfer, all with Deloitte, in their most recent analysis of the enterprise AI space. “Early-mover advantage may fade soon. As adoption becomes ubiquitous, AI-powered organizations may have to work harder to maintain an edge over their industry peers.” ... “This could mean that companies are using AI for IT-related applications such as analyzing IT infrastructure for anomalies, automating repetitive maintenance tasks, or guiding the work of technical support teams,” Ammanath and her co-authors note. Tellingly, business functions such as marketing, human resources, legal, and procurement ranked at the bottom of the list of AI-driven functions. An area that needs work is finding or preparing individuals to work with AI systems. Fewer than half of executives (45%) say they have “a high level of skill around integrating AI technology into their existing IT environments,” the survey shows. 



DevOps engineers: Common misconceptions about the role

Rather than planning to evolve the role of DevOps engineer, identify those people within the IT team — those who’ve been in development, architecture, system engineering, or operations for a few years but who also have the soft skills needed to both pitch ideas and deliver on them. DevOps engineers should focus on problem-solving skills and on their ability to increase efficiency, save time, and automate manual processes – and above all, to care about those who use their deliverables. Workplace disruption has happened. Communication has proven to be sometimes challenging in our virtual world. Projects stalled by this disruption must be restarted. There is already a skills and gender gap within IT. The DevOps engineer must become one of what the DevOps Institute calls “the Humans of DevOps.” These are engineers who have people skills along with process and technology skills. Learning among team members as well as within the enterprise is paramount, and there has never been a better time to do it. Consider whether a DevOps engineer has the soft skills to facilitate the learning of team members and to continuously transform the team according to the needs of the business.


The Hacker Battle for Home Routers

Trend Micro says that four years after the Mirai botnet, the landscape is more competitive than ever. "Ordinary internet users have no idea that this war is happening inside their own homes and how it is affecting them, which makes this issue all the more concerning," according to a new Trend Micro report, which was co-authored by Stephen Hilt, Fernando Mercês, Mayra Rosario and David Sancho. Botnet code running on a device can diminish bandwidth. It could also mean connectivity problems. If security solutions flag a device as being part of a botnet, certain services may be inaccessible. At worst, if a router is being used as a proxy for crime, the owner of the device could be blamed. With many workers still working from home during the pandemic, there's also a worry about how such infections could potentially affect enterprises as well. Throughout 2019 and into this year, Trend Micro says, its telemetry detected a rising number of brute-force attempts to infect routers, which involve trying various combinations of login credentials. The company suspects the attempts came from other routers.


The 'magic' of open source: better, faster, cheaper -- and trustworthy

Although open-source software has been available for decades, governments at all levels are seeing the benefits of embracing it to better deliver services to the public, a new report states. Those benefits include improving efficiency, lowering costs, improving trust, increasing transparency and reducing vendor lock-in, according to “Building and Reusing Open Source Tools for Government,” released this month by think tank New America. What’s more, open source allows for collaboration so that government entities with common problems don’t have to reinvent the wheel to solve them. For instance, the United Kingdom’s Government Digital Service’s Notify communications management platform is available as open source, and the government of Canada adapted it last year to fit its own needs, such as modifying it to support multiple languages, the report states. In California, the Government Operations Agency tasked a team to rethink how residents access information online. One of the 20-odd prototypes developed was used by another team to stand up an unemployment insurance application within Covid19.ca.gov, a website created to provide pandemic information, said Angelica Quirarte.


COVID-19 has disrupted cybersecurity, too – here's how businesses can decrease their risk

To keep enterprises running, businesses must secure remote access and collaboration services, step up anti-phishing efforts and strengthen business continuity. Businesses need to establish a culture of robust cyber hygiene by providing resources to the workforce and managing access and monitoring activity on critical assets. ... Not all organisations understand their security posture and the effectiveness of their security controls. As a result, they don’t make the right decisions or prioritise the correct actions, which leaves the enterprise open to attack and compromise. Securing end users, data and brand is the next priority. As the number of cybersecurity threats has increased, chief security officers and their teams are also benefiting from an increase in prioritisation. Budget rebalancing will be inevitable as other projects are put on hold to safeguard organisations and invest more in security. Cybersecurity strategists should now think longer term, about the security of their processes and architectures. They should prioritise, adopt and accelerate the execution of critical projects like Zero Trust, Software Defined Security, Secure Access Service Edge (SASE) and Identity and Access Management (IAM), as well as automation, to improve the security of remote users, devices and data.



Quote for the day:

"I am more afraid of an army of one hundred sheep led by a lion than an army of one hundred lions led by a sheep." -- Charles Maurice

Daily Tech Digest - July 28, 2020

The 6 Biggest Technology Trends In Accounting And Finance

When the internet of things, the system of interconnected devices and machines, combines with artificial intelligence, the result is the intelligence of things. These items can communicate and operate without human intervention and offer many advantages for accounting systems and finance professionals. The intelligence of things helps finance professionals track ledgers, transactions, and other records in real-time. With the support of artificial intelligence, patterns can be identified, or issues can be resolved quickly.  ... Robots don't have to be physical entities. In accounting and finance, robotic process automation (RPA) can handle repetitive and time-consuming tasks such as document analysis and processing, which is abundant in any accounting department. Freed up from these mundane tasks, accountants are able to spend time on strategy and advisory work. Intelligent automation (IA) is capable of mimicking human interaction and can even understand inferred meaning in client communication and adapt to an activity based on historical data. In addition, drones and unmanned aerial vehicles can even be deployed on appraisals and the like.


Transportation takes a leading edge with smart technology

As airports and aircraft become digitally connected through Edge IoT technology, many potential opportunities to improve air travel become an everyday reality. By harnessing Edge technology, 5G, and computer vision, many airlines are now able to drive significant operational efficiency. There are many use cases here, including: visual inspection-based pre-emptive maintenance that reduces downtime and delays, smarter scheduling and runway utilization, and cost-savings through smarter fuel usage. Safety and security can be significantly enhanced through Edge computing. Combining computer vision, computer audition, and analytics at the Edge can facilitate less disruptive and more rigorous safety and security. For example, facial recognition can be employed at smart gates to help tackle crime, and smart technology can be used to improve health screenings at airports. And there is huge potential for improving customer experience. By using Edge computing and smart technologies, the whole passenger journey can be connected and made smoother; from parking and arrival at the airport, through check-in, boarding, and inflight entertainment to arrival and baggage claim.


Attackers Exploiting High-Severity Network Security Flaw, Cisco Warns

The flaw specifically exists in the web services interface of Firepower Threat Defense (FTD) software, which is part of Cisco’s suite of network security and traffic management products; and its Adaptive Security Appliance (ASA) software, the operating system for its family of ASA corporate network security devices. The potential threat surface is vast: researchers with Rapid7 recently found 85,000 internet-accessible ASA/FTD devices. Worse, 398 of those are spread across 17 percent of the Fortune 500, researchers said. The flaw stems from a lack of proper input validation of URLs in HTTP requests processed by affected devices. Specifically, it allows attackers to conduct directory traversal attacks, a type of HTTP attack that enables bad actors to access restricted directories and execute commands outside of the web server’s root directory. Soon after patches were released, proof-of-concept (PoC) exploit code for the flaw was released Wednesday by security researcher Ahmed Aboul-Ela. A potential attacker can view more sensitive files within the web services file system: the web services files may contain information such as WebVPN configuration, bookmarks, web cookies, partial web content and HTTP URLs.
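As a generic illustration of the missing input validation (a sketch of the principle, not Cisco's code), a traversal guard canonicalizes the requested path before checking that it stays inside the web root, so that ../ sequences cannot escape it:

```python
# Generic traversal guard: canonicalize, then check containment.
# A minimal sketch of the principle, not any vendor's implementation.
from pathlib import Path

WEB_ROOT = Path("/var/www/html").resolve()

def safe_open(requested: str):
    # Resolving collapses "../" and symlinks *before* the check.
    candidate = (WEB_ROOT / requested).resolve()
    if not candidate.is_relative_to(WEB_ROOT):   # Python 3.9+
        raise PermissionError(f"traversal attempt blocked: {requested!r}")
    return candidate.open("rb")

safe_open("index.html")         # allowed (opens the file if it exists)
safe_open("../../etc/passwd")   # raises PermissionError
```

Validating the raw string alone is not enough; the check has to run on the resolved path, which is exactly the step a vulnerable URL parser skips.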


Scrum’s Nature: It Is a Tool; It Is Not About Love or Hate 

The question then is: Why would I “hate” a tool unsuited for the intended purpose or applied incompetently? Would I hate a hammer for not being capable of accurately driving a screw into a wooden beam? Probably not, as the hammer wasn’t designed for that purpose, and neither sheer will-power nor stamping your feet will change the fact. ... The job of the Scrum Master is hence to support the Scrum team by removing impediments (problems the team members cannot solve by themselves), thus supporting this decentralized leadership approach. Moreover, those impediments are mostly situated at an organizational level. Here, change does not happen by simply “getting things done,” but by working with other stakeholders and their plans, agendas, objectives, etc. ... Agile software development is not about solving (code) puzzles all day long. As a part of creating new products in complex environments, it is first of all about identifying which problems are worth solving from a customer perspective. Once that is established, and Scrum’s empirical approach has proven to be supportive in that respect, we strive to solve these puzzles with as little code as possible.


Dave: Mobile Banking App Breach Exposes 3 Million Accounts

Dave says the breach traces to the Waydev analytics platform for engineering teams that it formerly used. "As the result of a breach at Waydev, one of Dave's former third-party service providers, a malicious party recently gained unauthorized access to certain user data at Dave, including user passwords that were stored in hashed form using bcrypt, an industry-recognized hashing algorithm," Dave says in its Saturday data breach notification. Waydev, which is based in San Francisco, first warned on July 2 that its service may have been breached. "We learned from one of our trial environment users about an unauthorized use of their GitHub OAuth token," Waydev says in a data breach notification posted on its site that details security measures it recommends all users take. "The security of your data is our highest priority. Therefore, as a precautionary measure to protect your account, we revoked all GitHub OAuth tokens." Beyond that notice, "we notified the potentially affected users" directly, Waydev's Mike Dums tells Information Security Media Group. The company says that it immediately hired a third-party cybersecurity firm, Bit Sentinel, to help investigate the intrusion and lock down its environment, including having now fixed the vulnerability exploited by attackers.
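For context on why "stored in hashed form using bcrypt" softens the blow of a breach, here is a minimal sketch with the Python bcrypt library (the password below is a placeholder). Each hash carries its own random salt, and the cost factor makes brute-forcing stolen hashes deliberately slow:

```python
# Why bcrypt-hashed passwords are harder to exploit after a breach:
# salted, and intentionally expensive to compute. `pip install bcrypt`.
import bcrypt

password = b"correct horse battery staple"   # placeholder secret

# gensalt() embeds a random salt; rounds=12 sets the cost factor,
# roughly doubling the work for each increment.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
print(hashed)   # e.g. b'$2b$12$...' -- salt and cost travel with the hash

# Verification re-derives the hash from the stored salt and compares.
assert bcrypt.checkpw(password, hashed)
assert not bcrypt.checkpw(b"wrong guess", hashed)
```

An attacker holding the dump must still grind through the slow hash for every guess against every user, rather than reading passwords directly.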


Intelligent ways to tackle cyber attack

Absalom recommends that security practitioners balance the need for human oversight with the confidence to allow AI-supported controls to act autonomously and effectively. He says: “Such confidence will take time to develop, just as it will take time for practitioners to learn how best to work with intelligent systems.” Given time to develop and learn together, the combination of human and artificial intelligence should, Absalom believes, become a valuable component of an organisation’s cyber defences. As Morris points out, fraud management, SIEM, network traffic detection and endpoint detection all make use of learning algorithms to identify suspicious activity – based on previous usage data and shared pattern recognition – to establish “normal” patterns of use and flag outliers as potentially posing a risk to the organisation. For companies with a relatively small and/or simple IT infrastructure, Wenham argues that the cost of an AI-enabled SIEM would probably be prohibitive while offering little or no advantage when coupled with good security hygiene. On the other hand, for an enterprise with a large and complex IT infrastructure, Wenham says the cost of an AI-enabled SIEM might well be justified.
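As a toy illustration of that outlier-flagging idea (not any vendor's SIEM), here is a sketch using scikit-learn's IsolationForest on invented per-login features; a real system would train on far richer telemetry:

```python
# Toy outlier detection over invented login-event features:
# hour of day, bytes transferred, failed-attempt count.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[14.0, 2e6, 0.2],
                    scale=[3.0, 5e5, 0.5],
                    size=(1000, 3))          # "normal" behaviour baseline

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

suspicious = np.array([[3.0, 9e7, 12.0]])    # 3 a.m., huge transfer, many failures
print(model.predict(suspicious))             # [-1] == flagged as an outlier
```

The hard part in practice is not the model but the baseline: as the article notes, "normal" has to be learned from each organisation's own usage data before outliers mean anything.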


Are newer medical IoT devices less secure than old ones?

Mularski does concede that some particularly vulnerable old devices are often more isolated on the network by design, in part because they’re more recognizable as vulnerable assets. Windows 95-vintage x-ray machines, for example, are easy to spot as a potential target for a bad actor. “For the most part, I think most of the hospital environments, they do a good job at recognizing that they have these old devices, and the ones that are more vulnerable,” he said. This underlines a point most experts agree on – simple awareness of the potential security flaws on a given network is central to securing healthcare networks. Greg Murphy is the CEO of Ordr, a network visibility and security startup based in Santa Clara. He said that both Mularski and Staynings have points in their favor. “Anyone who minimizes the issue of legacy devices needs to walk a mile in the shoes of the biomedical engineering department at a hospital,” he said. “[But] on the flipside, new devices that are being connected to the network have huge vulnerabilities themselves. Many manufacturers themselves don’t know what vulnerabilities their devices have.”


The Opportunity in App Modernization

Domain-Driven Design and modeling techniques like SWIFT, Wardley Maps, and the Bounded Context Canvas have provided the technical and business heuristics for carving out microservices. There is, however, an emerging backlash against the complexity of microservices and an impetus to move towards simpler deployment architectures like modular monoliths. See To Microservices and Back Again. There are significant gaps that libraries and frameworks can fill by driving a backlog of stories and implementation from event storming or monolith decomposition. Generating a backlog of user stories and epics from event storming is a work of art and requires heavy facilitation, because DDD Is Broken. Dividing business capabilities into sub-business capabilities is tough, and candidate microservices need expert judgment before implementation. Observability tools and frameworks that aid in understanding an application's runtime metadata, paired with a profiler, theoretically have the information needed to make recommendations on starting points for decomposing monoliths. A tool that has started to look at this problem is vFunction.


Ten ‘antipatterns’ that are derailing technology transformations

One of the biggest sources of impact in technology transformations comes from simplifying the path to production, the steps involved from defining requirements to releasing software and using it with disciplined repetition across teams. This requires a lot of organizational and executive patience, as the impacted teams—app development, operations, security, support—can take weeks and months to perfect this coordinated dance. Tools and architecture changes can help, but to be effective, they need to be paired with changes to engineering practices, processes, and behaviors. Launching programs for large architecture and tooling changes often requires minimal effort, catches the executive and board’s fancy, and represents that things are moving. However, in our experience, without changes to engineering practices, processes, and behaviors, such programs have minimal or no impact. ... After months of futile top-down incentives and nudges for tools adoption, the bank refocused on how the tools enabled a new set of engineering practices and collaboration between teams. It showed how the new tools could simplify the path to production. 


Is Robotic Process Automation As Promising As It Looks?

RPA works best when application interfaces are static, procedures don’t change, and data patterns stay stable – a mix that is increasingly uncommon in today’s dynamic, digital scenario. The issue with RPA, in any case, isn’t that the tools aren’t clever enough. Rather, its main challenge is increasingly about robustness – handling unexpected, sudden changes in the IT world. Adding cognitive abilities to RPA doesn’t resolve these robustness issues – you essentially end up with more intelligent technology that is still just as brittle as it was in the past. RPA is still in the phase of advancement, thus it can introduce difficulties that may bring about undesirable results. Consequently, it is difficult for organizations to decide whether they ought to put their resources into robotic automation or wait for the technology to mature. A far-reaching business model must be created while thinking about the implementation of this technology; else, it will be futile if returns are just marginal, which may not be worth taking the risk. RPA is equipped for dealing with specific tasks and assignments, but isn’t designed to handle processes. Therefore, it appears legitimate to believe that, combined with other more specialized instruments, it can drive better execution.



Quote for the day:

"Leaders must encourage their organizations to dance to forms of music yet to be heard." -- Warren G. Bennis

Daily Tech Digest - July 27, 2020

DevOps: 5 things teams need from CIOs

To keep up with the pace of software and app releases, your developers and product teams need the ability to automate different test scenarios quickly, continuously, and in real time. Your teams do not have months and weeks to test, analyze, and update code before a new release. Investing in the tools they need to migrate to more modern platforms gives teams the flexibility they need to meet demand. As convenient and trusted as legacy systems are, if you are serious about DevOps, updating your legacy systems and architecture should be a primary focus. This is especially important as technologies like artificial intelligence, augmented reality, and virtual reality gain momentum and popularity. When planning budgets for the next year, consider designating resources to replace these legacy systems. ... Ensure that each team works well on its own before you have teams work together. For different teams to work together successfully, the individuals on each team must be able to work with each other. Make sure that development personnel attend all relevant meetings and discussions with operations/IT teams, and vice versa. Listen. Concentrate on what your team members are communicating. Be mindful; do not take a passive approach or focus only on your response.


The 4 essential pillars of cloud security

One of the key constructs of zero-trust computing is continuous improvement. An effective cloud security solution should enable ongoing insight into the entire cloud environment, thereby creating the opportunity for ongoing improvement. ... The second pillar involves providing security for end systems, managed services or different workloads running inside the cloud – commonly called platform as a service. This compute-level security has two key components. First is automated vulnerability management, which identifies and prevents vulnerabilities across the entire application lifecycle while prioritizing risk for cloud-native environments. ... Protecting the network is traditionally integral to on-premises environments but is equally important for the cloud. There are two major components of network protections. One is microsegmentation, a method of creating zones to isolate workloads from one another and secure them individually. This is at the heart of zero trust. By putting up roadblocks between applications and workloads, microsegmentation makes it much more difficult for would-be attackers to move laterally from one infected host to another. The method employs containerization (of the app and its operating environment) and segmenting the application itself in order to minimize any damage.


Microsoft told employees to work from home. One consequence was brutal

Perhaps, you might say, no one's really working any harder then. Yet when you're in an office, don't you also take time out to go for a walk (and scream at your boss), have a peaceful lunch (and scream at your boss), call your cable provider (and scream at customer service) or merely stare into space (and scream at the absurdity of existence)? The problem -- and for some bosses, great delight -- of modern technology is that it makes you believe employees are available any time, any place, anywhere. And really, how many humans are at their best earlier than they're used to or later than they'd prefer? Please, I'll get to the happier elements of this research shortly. But when working from home, Microsoft's employees apparently spent 10 percent more time in meetings. So, let's see, your work hours have expanded and you're spending more time in meetings. Where's the hope? Well, the researchers muse that there needed to be more meetings because there wasn't the opportunity for chance encounters. You know, in corridors and restrooms. And they believe hope lies in the fact that individual meeting times were shorter.


How to Build a Security Culture

Content is one of the biggest mistakes made in security awareness training. If your content is weak, boring, unrelatable, or filled with legal language, no one will pay attention. Although your intentions are great, you have to understand that dry paragraphs of plain text about hackers will not influence a behavior change. As we learned before, to create a culture you have to drive influence. And to drive influence, you need support. Just sending out an email once a month or once a quarter, or hanging up a poster that says ‘don’t get phished’, will do nothing to make an impact. In order to create a security culture shift, you need to understand what drives change. Change is not easy, and when it comes to employees changing their behavior, you have many barriers ahead. Change requires taking an established habit, associating that habit with negative behavior, and then influencing a new habit with a desired, positive outcome. Essentially, it means employees knowing why something they are doing is wrong and learning how to change the negative habit they’ve been demonstrating. So now that we’ve learned all of the challenges in creating a culture of security, how do we actually create one ourselves?


Use cases for blockchain in healthcare

One major issue present within healthcare is the production of counterfeit prescription drugs. The World Health Organisation (WHO) has estimated that one in 10 medical products in low- and middle-income countries are forged or substandard. Companies such as Quant aim to solve this issue using smart contracts and interoperability between blockchains to cut out middlemen and increase efficiency. “Data from embedded identification markers used to track individual products and components can be recorded onto distributed ledger technology (DLT) to provide a single source of truth with full transparency, accuracy, and accountability at every stage in the supply chain,” explained Gilbert Verdian, founder and CEO of Quant. “This is achieved through the shared nature of the ledger and the immutability that it offers, and with the data available to all participants, this solution has the potential to eliminate the need for intermediaries – and hence, opportunistic criminals – abusing the system. “The impact of such an approach would be dramatic. In fact, according to a new report by the market intelligence company BIS Research, blockchain-based supply chains would reduce revenue loss to pharmaceutical companies by up to $43 billion annually, as well as benefit others who inadvertently purchase counterfeit drugs.”


Data scientists are used to making up the rules. Now they're getting some of their own to follow.

Many, if not most, technology-oriented organizations already have ethical standards of some sort, developed to ensure that innovation is designed responsibly within their own ranks. The BCS, for example, asks practitioners to sign up to a code of conduct which stipulates, among other principles, that IT workers should act in the public interest, with integrity, competence and diligence, and that they should never take on a task they don't have the skills to complete. Similarly, the RSS's code of conduct defends acting in the public interest, fulfilling obligations to employers and clients, and showing competence and integrity. And the RAEng is governed by principles of openness, fairness, respect for the law, accuracy and rigor. Even big tech has jumped on the bandwagon, with Google committing to responsible technology and Microsoft drafting guidelines for 'ethical and trustworthy AI', to name but two. But while organizations have been pulling together ethics committees and writing up white papers on the rules that should govern the use of data, not much has been done at the individual level. Yet the source of all technology is the brain of those who come up with new ideas.


Cybersecurity for a Remote Workforce

Start with stopgap measures that can be implemented immediately, such as revising existing cyber risk guidelines, requirements, and controls on how employees access data and communicate with a company’s network. Behavior-analytics rules need to be adjusted to account for changes in the “normal” behavior of employees, many of whom now work outside standard business hours, so that security teams can focus their investigations effectively. Then examine new security tools and requirements for sharing and maintaining private information with vendors. For example, organizations may need to adopt more robust data loss controls, traffic analysis tools, and access restrictions. Ensure that vendors that aren’t currently prepared for heightened cyberattack risk commit to developing cyber preparedness plans to safely handle information or interact with your corporate network. Review changes to boost your technology and security infrastructure today, even if such changes may take years to implement. Some organizations may want to speed up their cloud strategies so that their IT resources can rapidly meet demand spikes from large-scale remote work.
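As a rough sketch of what adjusting those behavior-analytics rules might look like, the snippet below learns each employee's typical login hours from recent activity and flags only events far outside that personal baseline, instead of applying a fixed nine-to-five rule to everyone. The event format and thresholds are assumptions for illustration.

```python
from collections import defaultdict
from statistics import mean, stdev

# Illustrative login events: (user, hour_of_day), e.g. from auth logs
# collected over a rolling window. Hour math ignores midnight wraparound.
history = [
    ("alice", 9), ("alice", 10), ("alice", 21), ("alice", 22),
    ("bob", 8), ("bob", 9), ("bob", 9), ("bob", 10),
]

by_user = defaultdict(list)
for user, hour in history:
    by_user[user].append(hour)

def is_anomalous(user: str, hour: int, z_cut: float = 2.5) -> bool:
    """Flag a login only if it is far outside *this user's* usual hours."""
    hours = by_user.get(user, [])
    if len(hours) < 2:
        return True  # no baseline yet: escalate for review
    mu, sigma = mean(hours), stdev(hours)
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > z_cut

print(is_anomalous("alice", 22))  # False: late evenings are normal for alice
print(is_anomalous("bob", 3))     # True: 3 a.m. is far from bob's baseline
```

The point is not the statistics but the re-baselining: the definition of "normal" moves with each employee's new working pattern rather than with office hours.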


Digital transformation: 8 ways to spot your organization's rising leaders

The best digital transformation leaders know what the biggest pain points are inside the organization, says Lyke-Ho-Gland – and they create a digital roadmap addressing those points that the larger organization will get behind. ... “Outcome-focused leaders understand the need to drive that focus, assess any midcourse requests against the program commitments, and communicate relentlessly to reinforce expectations of sponsors.” They understand, measure, and report on both qualitative and quantitative benefits and make sure all project actions are structured to deliver those outcomes. ... “The most successful DT leaders can compellingly market those solutions to business stakeholders so that they adopt the new tools and ways of working,” says Lauren Trees, who heads up APQC’s Knowledge Management research group. ISG’s Hall describes one successful CIO he worked with as the best salesperson in the organization: “He had implemented all of the company’s products within IT (eat your own cooking) and talked to prospects daily on the challenges he was able to overcome with the product suite,” Hall recalls.


Block/Allow: The Changing Face of Hacker Linguistics

The most recent wave of changes demonstrates that more tech organizations, including more powerful ones, are treating the language they use as a serious concern, even though the history of the terms predates their use in computing, says Christina Dunbar-Hester, an associate professor of communication at the University of Southern California and the author of "Hacking Diversity: The Politics of Inclusion in Open Technology Cultures." "Language is symbolic and powerful but can also feel superficial. Certainly in the moment we're in, some people are asking to abolish the police, not to change unfortunate computer terms," she says. "But Black Lives Matter and the current moment gives people the ammunition to say that language does matter." However, there's a difference between changing word choices in documentation and getting people to change the words they use on a daily basis. Convincing developers, hackers, and other professionals to switch to more inclusive language has been a long struggle that predates the current norms. Tech has long faced a serious imbalance, paying and promoting white men at higher rates than women and Black, Indigenous, and people of color.


Data governance and context for evidence-based medicine: Transparency and bias in COVID-19 times

A number of people, including Cochrane excommunicate Peter Gøtzsche, argue that there can be a lot of bias in RCTs. This has largely to do with the fact that the vast majority of RCT data come from pharmaceutical companies, creating a conflict of interest. If aggregators like Cochrane do not validate the raw data they offer access to, they may be whitewashing them. Case in point: Surgisphere. What was initially referred to as the most influential COVID-19-related research to date was called into question as a result of a lack of transparency regarding the origin and trustworthiness of its data. The research used data sourced from Surgisphere, a startup claiming to operate as a data broker providing access to data from hospitals worldwide. However, whether that data is veracious, or was acquired transparently, is not clear. As a result, the research findings were put into question, and related decisions made by the WHO were reversed. Scales' opinion is that researchers have a responsibility to verify the source of the data they use. ... Over-reliance on RCTs may be part of the problem. RCTs can be enormous multi-year undertakings, summarized in what's often an eight-page journal article. Many important details and potential biases are left out.



Quote for the day:

"Leadership means forming a team and working toward common objectives that are tied to time, metrics, and resources." -- Russel Honore

Daily Tech Digest - July 26, 2020

Researchers develop new learning algorithm to boost AI efficiency

A working group led by the two computer scientists Wolfgang Maass and Robert Legenstein of TU Graz has adopted this principle in the development of the new machine learning algorithm e-prop (short for e-propagation). Researchers at the Institute of Theoretical Computer Science, which is also part of the European lighthouse project Human Brain Project, use spikes in their model for communication between neurons in an artificial neural network. The spikes only become active when they are needed for information processing in the network. Learning is a particular challenge for such less active networks, since longer observation is needed to determine which neuron connections improve network performance. Previous methods achieved too little learning success or required enormous storage space. E-prop now solves this problem by means of a decentralized method copied from the brain, in which each neuron documents when its connections were used in a so-called e-trace (eligibility trace). The method is roughly as powerful as the best and most elaborate other known learning methods.
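The eligibility-trace idea can be caricatured in a few lines. The sketch below is a simplified, non-spiking stand-in for e-prop, not the TU Graz implementation: each synapse keeps a locally stored, decaying trace of its recent use, and a broadcast learning signal converts those traces into weight updates without backpropagation through time. All constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 4, 2
W = rng.normal(scale=0.1, size=(n_out, n_in))  # synaptic weights
e_trace = np.zeros_like(W)                     # per-synapse eligibility traces
decay, lr = 0.9, 0.01                          # trace decay and learning rate

for _ in range(100):
    x = rng.random(n_in)   # presynaptic activity at this time step
    y = W @ x              # postsynaptic response (linear toy model)

    # Each synapse locally documents "I was just used": the trace is bumped
    # by the product of pre- and postsynaptic activity, then decays away.
    e_trace = decay * e_trace + np.outer(y, x)

    # A broadcast learning signal (here: error against a fixed target)
    # scales the stored traces into weight updates.
    learning_signal = np.ones(n_out) - y
    W += lr * learning_signal[:, None] * e_trace
```

The storage cost is one trace per synapse, which is what lets this style of method avoid the enormous memory that unrolled gradient methods would require.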


Data Leadership Book Review and Interview

The Data Leadership Framework is first about acknowledging that there is a whole bunch of stuff an organization needs to do to make the most of data. The five DLF Categories are where we evaluate an organization’s data capabilities and figure out where they are struggling most amid the complexity. The twenty-five DLF Disciplines are where we then focus energy (i.e., invest our limited resources) to achieve the biggest outcomes. By creating relative balance across the DLF Categories, we maximize the overall impact of our data efforts. This is what we need to be doing all the time with data, but without something like the Data Leadership Framework the problems can feel overwhelming, and people have trouble figuring out where to start or what to do next. This is true of everybody, from data architects and developers to the CEO. If we can use the Data Leadership Framework to make sense amidst the chaos, the individual steps themselves are much less daunting. Data competency is no longer a “nice-to-have” item. From data breaches to analytics-driven disruptors in every industry, this is as big a deal to businesses as cash flow.
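As a loose illustration of that balancing logic (not a tool from the book), the sketch below scores the five DLF Categories and points the next investment at whichever one lags the rest. The category names and scores are placeholders.

```python
# Hypothetical maturity scores (0-10) for the five DLF Categories; a real
# assessment would come from evaluating the framework's 25 Disciplines.
category_scores = {
    "Category A": 7,
    "Category B": 4,
    "Category C": 6,
    "Category D": 3,
    "Category E": 8,
}

weakest = min(category_scores, key=category_scores.get)
print(f"Invest next in: {weakest} (score {category_scores[weakest]})")

# "Relative balance": flag any category trailing the average by a wide margin.
avg = sum(category_scores.values()) / len(category_scores)
laggards = [c for c, s in category_scores.items() if s < avg - 2]
print("Out-of-balance categories:", laggards)
```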


Enterprise Architecture for Managing Information Technology Standards

While globalization is excellent for business, as it extends opportunities to markets that were previously closed and permits the sharing of ideas and information across different platforms, it can threaten the budgetary plans of SMBs. Investments in licensing, infrastructure, and global solutions in general hit this segment harshly. Lack of Talent Pool: This problem is primarily limited to the technology segment. Around half of employees lack the critical thinking skills that would qualify them to grow further in this field. The most significant hurdle IT teams have faced so far is having members who lack the skills to put a general hardware and software security environment in place cost-effectively. IT Policy Compliance Failure: Specific technologies used by IT projects don’t comply with the policy rules as defined by their departments. IT departments are sometimes unaware of techniques used by their teams and business stakeholders, increasing the risk of uncontrolled data flows and non-compliance. Besides, these technologies are sometimes incompatible with the existing portfolio. This increases IT debt, particularly if technology standards are not enforced.


IoT Architecture: Topology and Edge Compute Considerations

Network engineers often have experience with a particular topology and may assume it can be used in any setting, but sometimes another choice would be more optimal for a different use case. To determine whether a mesh networking topology is a good choice for your application, it is important to understand the pros and cons of this strategy. A critical factor to analyze is your system's timing requirements. Mesh networking topologies route data from node to node across a network that is architected in a mesh, so the added latency of each "hop" needs to be accounted for. Do you need the data back in 100 ms, or can you live with once a second? ... Wireless point-to-point (PTP) and point-to-multipoint (PTMP) are topologies used for connectivity in a wide range of applications, such as use cases where you want to replace cables with wireless communication. These protocols communicate between two devices (point-to-point) or from one device to many (point-to-multipoint). There are a few factors to consider, such as distance, timing and battery power, that may indicate whether a PTP network is needed versus a mesh network.
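The timing question reduces to a quick back-of-the-envelope check: multiply the number of hops by the per-hop cost and compare the result with the application's deadline. The sketch below uses made-up figures; one hop roughly corresponds to a point-to-point link, more than one to a mesh route.

```python
def route_latency_ms(hops: int, per_hop_ms: float) -> float:
    """Worst-case end-to-end latency for a multi-hop route."""
    return hops * per_hop_ms

deadline_ms = 100  # the application needs the data back within 100 ms
per_hop_ms = 30    # assumed per-hop cost: radio airtime plus forwarding

for hops in (1, 2, 3, 4):
    latency = route_latency_ms(hops, per_hop_ms)
    verdict = "OK" if latency <= deadline_ms else "misses deadline"
    print(f"{hops} hop(s): {latency:.0f} ms -> {verdict}")
```

With these assumed numbers, a route of four or more hops blows the 100 ms budget, which is exactly the kind of result that would push a design toward PTP or a shallower mesh.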


An introduction to confidential edge computing for IoT security

Recent attacks, even outside of IoT, showed that hackers exploited weak configurations of public cloud services to access sensitive data. The reason that hackers succeeded in obtaining sensitive information stored on a public cloud had nothing to do with the security mechanisms implemented by the cloud provider; rather, it was the result of small mistakes made by the end users, typically in the Web Application Firewall (WAF) that controls access to the cloud network, or by leaving credentials unprotected. These little mistakes are almost inevitable for companies that have a cloud-only infrastructure. They emphasize the need for broader security expertise aimed at defining the security architecture to be enforced on the overall system, and at finding out whether the security features of the cloud provider need to be complemented by additional protection mechanisms. A first logical step consists of demarcating sensitive and non-sensitive information, which helps the IT team set up cloud services with appropriate priorities and safer security practices.
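A minimal sketch of that first step, under assumed field names and classification rules, might tag each data field with a sensitivity level and derive a placement decision from the tag:

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

# Illustrative classification policy: field name -> sensitivity level.
POLICY = {
    "device_id": Sensitivity.PUBLIC,
    "firmware_version": Sensitivity.INTERNAL,
    "gps_location": Sensitivity.CONFIDENTIAL,
    "owner_email": Sensitivity.CONFIDENTIAL,
}

def placement(field: str) -> str:
    """Decide where a field may be processed or stored under the policy."""
    level = POLICY.get(field, Sensitivity.CONFIDENTIAL)  # unknown -> strictest
    if level is Sensitivity.CONFIDENTIAL:
        return "confidential edge enclave, encrypted at rest"
    return "standard public cloud service"

record = {"device_id": "sensor-42", "gps_location": "51.5,-0.1"}
for field in record:
    print(f"{field}: {placement(field)}")
```

Defaulting unknown fields to the strictest level is the key design choice here: a misconfiguration then fails safe rather than exposing data.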


How IoT Devices are Rapidly Revolutionizing the World of Small Businesses

Small business owners may want to take some time to look through a list of the top IoT software rankings before they decide on a single platform. It can be difficult to migrate to another one after your firm has become heavily invested in a certain type of technology. This is especially true for those who plan to primarily use consumer-grade equipment, which often goes through various revisions as market pressures force engineers to redesign certain aspects of their builds. Keep in mind that all Internet of Things devices include some sort of embedded general-purpose computer. This means that each piece of smart equipment is free to share information collected from onboard peripherals. That makes it easy to learn more about how different circumstances impact your business. Think of a hotel or restaurant that has multiple rooms. Each of these has an adjustable thermostat. If some of them are set too high or too low, then the business in question may end up losing thousands by using too much energy. A number of service providers in the hospitality industry now use IoT software to monitor energy usage throughout entire buildings.
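To make the hotel example concrete, here is a small sketch, with invented readings and thresholds, that scans per-room thermostat set points and flags the ones outside an energy-efficient band:

```python
# Illustrative per-room thermostat set points in degrees Fahrenheit,
# as reported by each room's IoT thermostat.
setpoints = {"Room 101": 68, "Room 102": 85, "Room 103": 71, "Room 104": 55}

LOW, HIGH = 64, 76  # assumed energy-efficient comfort band

wasteful = {room: t for room, t in setpoints.items() if not LOW <= t <= HIGH}
for room, t in sorted(wasteful.items()):
    print(f"{room}: set to {t}F, outside the {LOW}-{HIGH}F efficiency band")
```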


The Journey to Effective Data Management in HPC

High Performance Computing (HPC) continues to be a major resource investment of research organizations worldwide. Large datasets are used and generated by HPC, and these make data management a key component of effectively using the expensive resources that underlie HPC infrastructure. Despite this critical element of HPC, many organizations do not have a data management plan in place. As an example of data generation rates, the total storage footprint worldwide from DNA sequencing alone is estimated to exceed 2 exabytes by 2025, most of which will be processed and stored in an HPC environment. This growth rate places an immense strain on life science organizations. But it is not only big data from life sciences that is stressing HPC infrastructure: research institutions like Lawrence Livermore National Laboratory (LLNL) generate 30 TB of data a day. This data supports their research and development efforts applied to national security, and these daily data volumes can also be expected to increase. As the HPC community continues to generate massive amounts of file data, drawing insights, making that data useful, and protecting the data becomes a considerable effort with major implications.


Taking A Deep Look at DLT (Distributed Ledger Technology)

A great deal of effort and investment is continuously going into mitigating blockchain’s scalability issues. One of the headline motivations for this work is to level up the user experience on blockchain networks to accommodate a diverse range of concurrent activity without compromising any of the blockchain elements. When this is achieved, blockchain architects and companies will have a more comprehensive suite of blockchain tools to meet new and growing needs in the market. For a long time blockchain has been unfairly subjected to pessimistic scrutiny that undermines its value. Unfair in the sense that blockchain is brilliant, revolutionary and still young. But then again, nothing exists in a vacuum totally free from pessimistic sentiments. Everything in existence has some criticism attached to it. Even so, blockchain is resilient! It is here for good – and so is DLT. If you look at DLT you will see that many DLT-based start-ups offer business-to-business solutions. Distributed ledgers are well poised for companies because they address multiple infrastructural issues that plague industries. One of them is databases. Given how disparate and complex organizations have grown, legacy databases have fallen victim to inefficiencies and security loopholes.


Adapting online security to the ways we work, remotely and post-coronavirus

Not only were many companies unprepared for the mass transition to remote work, but they were also caught off guard by the added technology and security needs. According to CNBC, 53 senior technology executives say their firms have never stress-tested their systems for a crisis like this. For example, when employees are working from the office, it is easier for IT teams to identify threat agents attempting to get into systems, since hackers connect from locations outside those offices. With employees dispersed at their homes, however, these foreign breaches are harder to recognize. Companies have also been caught flat-footed during this crisis by relying on employees to use their personal devices instead of providing a separate work device, which prevents IT teams from identifying suspicious activity. To keep employee and company information secure, it is up to the CISO and IT decision-makers to create and strictly enforce a regular practice for accessing, editing and storing data. Most employees value productivity over security. This is problematic: employees gravitate towards the tools and technology they prefer to get their work done effectively.


Is Your Approach to Data Protection More Expensive than Useful?

Now more than ever, data is the lifeblood of an organization – and any incident of data loss or application unavailability can take a significant toll on the business. With the recent rise in cyberattacks and exponential data growth, protecting data has become job #1 for many IT organizations. Their biggest hurdle: managing aging infrastructure with limited resources. Tight budgets should not discourage business leaders from modernizing data protection. Organizations that hang onto older backup technology don’t have the tools they need to face today’s threats. Rigid, siloed infrastructures aren’t agile or scalable enough to keep up with fluctuations in data requirements, and they are based on an equally rigid backup approach. Traditional backup systems behave like insurance policies, locking data away until you need it. That’s like keeping an extra car battery in the garage, waiting for a possible crisis. The backup battery might seem like a reasonable preventive measure, but most of the time it’s a waste of space, and if the crisis never arises it’s an unnecessary upfront investment, more expensive than useful. In the age of COVID-19, where cash is king and onsite resources are particularly limited, some IT departments are postponing data protection modernization, looking to simplify overall operations and lower infrastructure costs first.



Quote for the day:

"Do what you can, where you are, with what you have." -- Teddy Roosevelt