Daily Tech Digest - November 24, 2020

Why securing the DNS layer is crucial to fight cyber crime

When left insecure, DNS servers can result in devastating consequences for businesses that fall victim to attack. Terry Bishop, solutions architect at RiskIQ, says: “Malicious actors are constantly looking to exploit weak links in target organisations. A vulnerable DNS server would certainly be considered a high-value target, given the variety of directions that could be taken once compromised. At RiskIQ, we find most organisations are unaware of about 30% of their external-facing assets. That can be websites, mail servers, remote gateways, and so on. If any of these systems are left unpatched, unmonitored or unmanaged, it presents an opportunity for compromise and further exploitation; whether that is directed at company assets or at other, more valuable infrastructure such as DNS servers depends on the motives of the attacker and the specifics of the breached environment.” Kevin Curran, senior member at the Institute of Electrical and Electronics Engineers (IEEE) and professor of cyber security at Ulster University, agrees that DNS attacks can be highly disruptive. In fact, an improperly working DNS layer would effectively break the internet, he says.
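
Bishop's point that organisations lose track of external-facing assets is, at bottom, a DNS inventory problem. Below is a minimal sketch, assuming the dnspython package and a placeholder domain, of pulling the public records (websites, mail servers, name servers) that an attacker could enumerate just as easily:

```python
# Minimal DNS inventory sketch; assumes `pip install dnspython` and an
# illustrative domain. Record types cover the asset classes Bishop lists:
# websites (A/AAAA), mail servers (MX), and supporting infrastructure (NS, TXT).
import dns.resolver

RECORD_TYPES = ["A", "AAAA", "MX", "NS", "TXT"]

def enumerate_dns(domain: str) -> dict:
    """Collect common public record types so they can be tracked, patched and monitored."""
    inventory = {}
    for rtype in RECORD_TYPES:
        try:
            answers = dns.resolver.resolve(domain, rtype)
            inventory[rtype] = [answer.to_text() for answer in answers]
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.resolver.NoNameservers):
            inventory[rtype] = []
    return inventory

if __name__ == "__main__":
    for record_type, values in enumerate_dns("example.com").items():
        print(record_type, values)
```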


The Dark Side of AI: Previewing Criminal Uses

Criminals' Top Goal: Profit. If that's the high level, the applied level is that criminals have never shied away from finding innovative ways to earn an illicit profit, be it through social engineering refinements, new business models or adopting new types of technology. And AI is no exception. "Criminals are likely to make use of AI to facilitate and improve their attacks by maximizing opportunities for profit within a shorter period, exploiting more victims and creating new, innovative criminal business models - all the while reducing their chances of being caught," according to the report. Thankfully, all is not doom and gloom. "AI promises the world greater efficiency, automation and autonomy," says Edvardas Šileris, who heads Europol's European Cybercrime Center, aka EC3. "At a time where the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology." ... Even criminal uptake of deepfakes has been scant. "The main use of deepfakes still overwhelmingly appears to be for non-consensual pornographic purposes," according to the report. It cites research from last year by the Amsterdam-based AI firm Deeptrace, which "found 15,000 deepfake videos online..."


Flash storage debate heats up over QLC SSDs vs. HDDs

Rosemarin said some vendors front-end QLC with TLC flash, storage-class memory or DRAM to address caching and performance issues, but they run the risk of scaling problems and destroying the cost advantage that the denser flash technology can bring. "We had to launch a whole new architecture with FlashArray//C to optimize and run QLC," Rosemarin said. "Otherwise, you're very quickly going to get in a position where you're going to tell clients it doesn't make sense to use QLC because [the] architecture can't do it cost-efficiently." Vast Data's Universal Storage uses Intel Optane SSDs, built on faster, more costly 3D XPoint technology, to buffer writes, store metadata and improve latency and endurance. But Jeff Denworth, co-founder and chief marketing officer at the startup, said the system brings cost savings over alternatives through better longevity and data-reduction code, for starters. "We ask customers all the time, 'If you had the choice, would you buy a hard drive-based system, if cost wasn't the only issue?' And not a single customer has ever said, 'Yeah, give me spinning rust,'" Denworth said. Denser NAND flash chip technology isn't the only innovation that could help drive down the costs of QLC flash. Roger Peene, a vice president in Micron's storage business unit, spotlighted the company's latest 176-layer 3D NAND that can also boost density and lower costs.


Instrumenting the Network for Successful AIOps

The highest quality network data is obtained by deploying devices such as network TAPs that mirror the raw network traffic. Many vendors offer physical and virtual versions of these to gather packet data from the data center as well as virtualized segments of the network. AWS and Google Cloud have both launched Virtual Private Cloud (VPC) traffic/packet mirroring features in the last year that allow users to duplicate traffic to and from their applications and forward it to cloud-native performance and security monitoring tools, so there are solid options for gathering packet data from cloud-hosted applications too. Network TAPs let network monitoring tools view the raw data without impacting the actual data plane. When dealing with high-sensitivity applications such as ultra-low-latency trading, high quality network monitoring tools use timestamping with nanosecond accuracy to identify bursts with millisecond resolution which might cause packet drops that normal SNMP-type counters can’t explain. This fidelity of data is relevant in other demanding applications such as real-time video decoding, gaming multicast servers, HPC and other critical IoT control systems.
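
To make the nanosecond timestamps concrete, here is a small sketch (plain Python, with synthetic timestamps rather than captured traffic) of the burst detection described above: packets are binned into 1 ms windows so that short bursts, which second-level SNMP counters would average away, become visible:

```python
# Hedged illustration of millisecond-resolution burst detection over
# nanosecond packet timestamps. The sample data below is synthetic.
from collections import Counter

NS_PER_MS = 1_000_000

def detect_bursts(timestamps_ns, threshold_pkts_per_ms):
    """Return the 1 ms windows whose packet count exceeds the threshold."""
    per_window = Counter(ts // NS_PER_MS for ts in timestamps_ns)
    return {window: count for window, count in per_window.items()
            if count > threshold_pkts_per_ms}

# Usage: flag any millisecond carrying more than 1,000 packets.
sample = [i * 500 for i in range(4000)]   # one packet every 500 ns = 2,000 pkts/ms
print(detect_bursts(sample, threshold_pkts_per_ms=1000))   # {0: 2000, 1: 2000}
```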


How to create an effective software architecture roadmap

The iteration model demonstrates how the architecture and related software systems will change and evolve on the way to a final goal. Each large iteration segment represents one milestone goal of an overall initiative, such as updating a particular application database or modernizing a set of legacy services. Then, each one of those segments contains a list of every project involved in meeting that milestone. For instance, a legacy service modernization iteration requires a review of the code, refactoring efforts, testing phases and deployment preparations. While architects may feel pressured to create a realistic schedule from the start of the iteration modeling phase, Richards said that it's not harmful to be aspirational, imaginative or even mildly unrealistic at this stage. Since this is still an unrefined plan, try to ignore limitations like cost and staffing, and focus on goals. ... Once an architect has an iteration model in place, the portfolio model injects reality into the roadmap. In this stage, the architect or software-side project lead analyzes the feasibility of the overall goal. They examine the initiative, the requirements for each planned iteration and the resources available for the individual projects within those iterations. 


How new-age data analytics is revolutionising the recruitment and hiring segment

There are innumerable advantages attached to opting for AI over an ordinary recruitment team. With the introduction of AI, companies can easily lower the costs involved in maintaining a recruitment team. The highly automated screening procedures select quality candidates that in turn will help the organization grow and retain better personnel – a factor that is otherwise overlooked in the conventional recruitment process. Employing AI and ML automates the whole recruitment process and helps eliminate the probability of human errors. Automation increases efficiency and improves the performance of other departments of the company. The traditional recruitment process tends to be very costly. Several teams are often needed for the purpose of hiring people in a company. But with the help of AI and ML, the unnecessary costs can be done away with and the various stages of hiring can all be conducted on a single dedicated platform. Additionally, if the company engages in a lot of contract work, then AI can be used for analysing the project plan and predicting the kinds, numbers, ratio and skills of workers that may be required for the purpose. The potential scope of AI and ML is not limited by the capabilities of today's systems.


6 experts share quantum computing predictions for 2021

"Next year is going to be when we start seeing what algorithms are going to show the most promise in this near term era. We have enough qubits, we have really high fidelities, and some capabilities to allow brilliant people to have a set of tools that they just haven't had access to," Uttley said. "Next year what we will see is the advancement into some areas that really start to show promise. Now you can double down instead of doing a scattershot approach. You can say, 'This is showing really high energy, let's put more resources and computational time against it.' Widespread use, where it's more integrated into the typical business process, that is probably a decade away. But it won't be that long before we find applications for which we're using quantum computers in the real world. That is in more the 18-24 month range." Uttley noted that the companies already using Honeywell's quantum computer are increasingly interested in spending more and more time with it. Companies working with chemicals and the material sciences have shown the most interest he said, adding that there are also healthcare applications that would show promise.


How Industrial IoT Security Can Catch Up With OT/IT Convergence

The bigger challenge, he says, is not in the silicon of servers and networking appliances but in the brains of security professionals. "The harder problem, I think, is the skills problem, which is that we have very different expertise existing within companies and in the wider security community, between people who are IT security experts and people who are OT security experts," Tsonchev says. "And it's very rare to find one individual where those skills converge." It's critical that companies looking to solve the converged security problem, whether in technology or technologists, figure out what the technology and skills need to look like in order to support their business goals. And they need to recognize that the skills to protect both sides of the organization may not reside in a single person, Tsonchev says. "There's obviously a very deep cultural difference that comes from the nature of the environments characterized by the standard truism that confidentiality is the priority in IT and availability is the priority in OT," he explains. And that difference in mindset is natural – and to some extent essential – based on the requirements of the job. Where the two can begin to come together, Tsonchev says, is in the evolution away from a protection-based mindset to a way of looking at security based on risk and risk tolerance.


Dark Data: Goldmine or Minefield?

The issue here is that companies are still thinking in terms of sandboxes even when they are face-to-face with the entire beach. A system that considers analytics and governance as flip sides of the same coin and incorporates them synergistically across all enterprise data is called for. Data that has been managed has the potential to capture the corpus of human knowledge within the organization, reflecting the human intent of a business. It can offer substantial insight into employee work patterns, communication networks, subject matter expertise, and even organizational influencers and business processes. It also holds the potential for eliminating duplicative human effort, which can be an excellent tool to increase productivity and output. This alone is a sure-fire way to boost productivity, spot common pain points in the workstream, and surface insights for parts of the organization where untapped potential may lie. Companies that have successfully bridged information management with analytics are answering fundamental business questions that have massive impact on revenue: Who are the key employees? ... With the increase in sophistication of analytics and its convergence with information governance, we will likely see a renaissance for this dark data that is presently largely a liability.


NCSC issues retail security alert ahead of Black Friday sales

“We want online shoppers to feel confident that they’re making the right choices, and following our tips will reduce the risk of giving an early gift to cyber criminals. If you spot a suspicious email, report it to us, or if you think you’ve fallen victim to a scam, report the details to Action Fraud and contact your bank as soon as you can.” Helen Dickinson, chief executive of the British Retail Consortium (BRC), added: “With more and more of us browsing and shopping online, retailers have invested in cutting-edge systems and expertise to protect their customers from cyber threats, and the BRC recently published a Cyber Resilience Toolkit for extra support to help to make the industry more secure. “However, we as customers also have a part to play and should follow the NCSC’s helpful tips for staying safe online.” The NCSC’s advice, available on its website, covers a number of areas, including being selective about where you shop, only providing necessary information, using secure and protected payments, securing online accounts, identifying potential phishing attempts, and how to deal with any problems. Carl Wearn, head of e-crime at Mimecast, commented: “Some of the main things to look out for include phishing emails and brand spoofing, as we are likely to see an increase in both.



Quote for the day:

“Focus on the journey, not the destination. Joy is found not in finishing an activity but in doing it.” -- Greg Anderson

Daily Tech Digest - November 23, 2020

Superhuman resources: How HR leaders have redefined their C-suite role

CHROs have to be able to envision how the strategy will be executed, the talents and skills required to accomplish the work, and the qualities needed from leaders to maximize the organization’s potential. Increasingly, that requires a nuanced understanding of how technology and humans will interact. “HR leaders sit at a crossroads because of the rise of artificial intelligence and can really predict whether a company is going to elevate their humans or eliminate their humans,” said Ellyn Shook, the CHRO of professional-services firm Accenture. “We’re starting to see new roles and capabilities in our own organization, and we’re seeing a whole new way of doing what we call work planning. The real value that can be unlocked lies in human beings and intelligent technologies working together.” ... CHROs must operate at a slightly higher altitude than their peers on the leadership team to ensure that the different parts of the business work well together. At their best, these leaders view the entire organization as a dynamic 3D model, and can see where different parts are meshing well and building on other parts, and also where there are gaps and seams. The key is to make the whole organization greater than the sum of its parts.


Three IT strategies for the new era of hybrid work

While the hyper-automation strategy will make life much easier for IT teams by delivering on greater automated experiences, there will always be issues that humans will have to resolve. Organisations must equip their IT teams with the tools to handle these issues remotely and securely to succeed in an increasingly complex environment. This begins with utilising AI and building on deep learning capabilities that provide critical information to IT teams in real time. Say an employee is unable to access restricted customer information from his home network to complete a sales order and needs to enable VPN access. With the right software platforms, the IT representative will be able to guide him remotely, to push the necessary VPN software to his device, configure the required access information and provision his access through automation scripts. IT would also be able to discover the model of the router used in his home network if required and assist in router settings if the employee assigns the rights and authorisation. IT can also assess its vulnerabilities and advise the employee accordingly. In the past, this work would have had to be completed in the office. With hybrid work environments, going back to the office may not even be an option.


Security pros fear prosecution under outdated UK laws

MP Ruth Edwards, who previously led on cyber security policy for techUK, said: “The Computer Misuse Act, though world-leading at the time of its introduction, was put on the statute book when 0.5% of the population used the internet. The digital world has changed beyond recognition, and this survey clearly shows that it is time for the Computer Misuse Act to adapt. “This year has been dominated by a public health emergency – the coronavirus pandemic, but it has also brought our reliance on cyber security into stark relief. We have seen attempts to hack vaccine trials, misinformation campaigns linking 5G to coronavirus, a huge array of coronavirus-related scams, an increase in remote working and more services move online. “Our reliance on safe and resilient digital technologies has never been greater. If ever there was going to be a time to prioritise the rapid modernisation of our cyber legislation, and review the Computer Misuse Act, it is now,” she said. The study is the first piece of work to quantify and analyse the views of the wider security community in the UK on this issue, and the campaigners say they have found substantial concerns and confusion about the CMA that are hampering the UK’s cyber defences.


An In-Depth Explanation of Code Complexity

By knowing how many independent paths there are through a piece of code, we know how many paths there are to test. I'm not advocating for 100% code coverage by the way—that's often a meaningless software metric. However, I always advocate for as high a level of code coverage as is both practical and possible. So, by knowing how many code paths there are, we can know how many paths we have to test. As a result, you have a measure of how many tests are required, at a minimum, to ensure that the code's covered. ... By reducing software complexity, we can develop with greater predictability. What I mean by that is we're better able to say—with confidence—how long a section of code takes to complete. By knowing this, we're better able to predict how long a release takes to ship. Based on this knowledge the business or organization is better able to set its goals and expectations, especially ones that are directly dependent on said software. When this happens, it’s easier to set realistic budgets, forecasts, and so on. Helping developers learn and grow is the final benefit of understanding why their code is considered complex. The tools I've used to assess complexity up until this point don't do that. What they do is provide an overall or granular complexity score.
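Cyclomatic complexity, the "independent paths" measure discussed here, is easy to approximate yourself. The sketch below is a rough illustration (not any specific tool's algorithm): it counts the decision points in a Python function, since each one adds another path a test would need to cover.

```python
# Rough cyclomatic complexity estimate: 1 + the number of branching constructs.
# This is a simplified sketch; real analysers handle more node types and edge cases.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Return 1 plus the count of decision points found in the source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

sample = """
def classify(order):
    if order.total > 100 and order.customer.is_vip:
        return "priority"
    for item in order.items:
        if item.backordered:
            return "delayed"
    return "standard"
"""
print(cyclomatic_complexity(sample))  # 5, so at least five paths to test
```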


How DevOps Teams Get Automation Backwards

Do you know what data (and metadata) needs to be backed up in order to successfully restore? Do you know how it will be stored, protected and monitored? Does your storage plan comply with relevant statutes, such as CCPA and GDPR? Do you regularly execute recovery scenarios, to test the integrity of your backups and the effectiveness of your restore process? At the heart of each of the above examples, the problem is due in large part to a top-down mandate, and a lack of buy-in from the affected teams. If the DevOps team has a sense of ownership over the new processes, then they will be much more eager to take on any challenges that arise. DevOps automation isn’t the solution to every problem. Automated UI tests are a great example of an automation solution that’s right for some types of organizations, but not for others. These sorts of tests, depending on frequency of UI changes, can be fragile and difficult to manage. Therefore, teams looking to adopt automated UI testing should first assess whether the anticipated benefits are worth the costs, and then ensure they have a plan for monitoring and maintaining the tests. Finally, beware of automating any DevOps process that you don’t use on a frequent basis.
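
One of the recovery-scenario questions above lends itself to a simple automated check. The sketch below is hypothetical (invented paths and manifest format, not tied to any particular backup tool) and verifies that files restored during a drill still match the checksums recorded at backup time:

```python
# Hypothetical restore-drill check: compare restored files against a checksum
# manifest written when the backup was taken. Paths and format are illustrative.
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(manifest_file: Path, restore_root: Path) -> list[str]:
    """Return the relative paths whose restored contents do not match the manifest."""
    manifest = json.loads(manifest_file.read_text())  # {"relative/path": "sha256 hex", ...}
    return [rel for rel, expected in manifest.items()
            if sha256(restore_root / rel) != expected]

if __name__ == "__main__":
    failures = verify_restore(Path("backup-manifest.json"), Path("/tmp/restore-test"))
    print("restore OK" if not failures else f"mismatched files: {failures}")
```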


Security by Design: Are We at a Tipping Point?

A big contributor to security flat-footedness is the traditional “trust but verify” approach, with bolt-on and reactive architectures (and solutions) that make security complex and expensive. Detecting a threat, assessing true vs. false alerts, responding to incidents holistically and doing it all in a timely fashion demands a sizeable security workforce; a strong, well-practiced playbook; and an agile security model. As we have learned over the years, this has been hard to achieve in practice—even harder for small or mid-size organizations and those with smaller budgets. Even though dwell time has reduced in the last few years, attackers routinely spend days, weeks or months in a breached environment before being detected. Regulations like the EU General Data Protection Regulation (GDPR) mandate reporting of notifiable data breaches within 72 hours, even as the median dwell time stands at 56 days, rising to 141 days for breaches not detected internally. Forrester analyst John Kindervag envisioned a new approach in 2009, called “zero trust.” It was founded on the belief that trust itself represents a vulnerability and security must be designed into business with a “never trust, always verify” model.


Distributors adding security depth

“With the rapidly changing security landscape, and home working seemingly here to stay, this partnership will help organisations alleviate these security pressures through one consolidated cloud solution. Together with Cloud Distribution, we will continue to expand our UK Partner network, ensuring we are offering robust cloud security solutions with our approach that takes user organisations beyond events and alerts, and into 24/7 automated attack prevention,” he said.  Other distributors have also taken steps to add depth to their portfolios. Last month, e92plus also moved to bolster its offerings with the signing of web security player Source Defense. The distie is responding to the threats around e-commerce and arming resellers with tools to help customers that have been forced to sell online during the pandemic. The shift online has come as threats have spiked and the criminal activity around online transactions has increased. “As more businesses look to transact business online, bad actors are exploiting client-side vulnerabilities that aren’t protected by traditional solutions like web application firewalls,” said Sam Murdoch, managing director at e92cloud.


3 Steps CISOs Can Take to Convey Strategy for Budget Presentations

CISOs recognize they cannot reduce their organization's cyber-risk to zero. Still, they can reduce it as much as possible by focusing on eliminating the most significant risks first. Therefore, when developing a budget, CISOs should consider a proactive risk-based approach that homes in on the biggest cyber-risks facing the business. This risk-based approach allows the CISO to quantify the risk across all areas of cyber weakness, and then prioritize where efforts are best expended. This ensures maximum impact from fixed budgets and teams. The fact is, the National Institute of Standards and Technology reports that an average breach can cost an organization upward of $4 million — more costly than the overall budget for many organizations. Consider a scenario where one CISO invests heavily in proactive measures, successfully avoiding a major breach, while another invests primarily in reactive measures and ends up cleaning up after a major breach. The benefit is that one (the proactively inclined CISO) ends up spending 10x less overall. ... While there is more awareness among top leadership and board members regarding the daunting challenges of cybersecurity, a board member's view of cybersecurity is primarily concerned with cybersecurity as a set of risk items, each with a certain likelihood of happening with some business impact.
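
The risk-based prioritisation described here can be expressed very simply: score each area of weakness by likelihood times impact, then spend the fixed budget on the largest expected losses first. A toy illustration with invented figures:

```python
# Illustrative risk-based budget prioritisation. The areas, likelihoods and
# impact figures are invented; a real register would be far larger.
risks = [
    {"area": "unpatched internet-facing servers", "likelihood": 0.6, "impact_usd": 4_000_000},
    {"area": "phishing / credential theft",       "likelihood": 0.8, "impact_usd": 1_500_000},
    {"area": "insider data exfiltration",         "likelihood": 0.2, "impact_usd": 5_000_000},
]

for risk in risks:
    risk["expected_loss"] = risk["likelihood"] * risk["impact_usd"]

# Highest expected loss first: this ordering drives where budget goes.
for risk in sorted(risks, key=lambda r: r["expected_loss"], reverse=True):
    print(f'{risk["area"]}: expected annual loss ${risk["expected_loss"]:,.0f}')
```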


Keeping data flowing could soon cost billions, business warned

As soon as the UK leaves the EU, it will also cease to be part of the GDPR-covered zone – and other mechanisms will be necessary to allow data to move between the two zones. The UK government, for its part, has already green-lighted the free flow of digital information from the UK to the EU, and has made it clear that it hopes the EU will return the favor. This would be called an adequacy agreement – a recognition that UK laws can adequately protect the personal data of EU citizens. But whether the UK will be granted adequacy is still up for debate, with just over one month to go. If no deal is achieved on data transfers, companies that rely on EU data will need to look at alternative solutions. These include standard contractual clauses (SCCs), for example, which are signed contracts between the sender and the receiver of personal data that are approved by an EU authority, and need to be drawn for each individual data transfer. SCCs are likely to be the go-to data transfer mechanism in the "overwhelming majority of cases," according to the report, and drafting the contracts for every single relevant data exchange will represent a costly bureaucratic and legal exercise for many firms. UCL's researchers estimated, for example, that the London-based university would have to amend and update over 5,000 contracts.


Even the world’s freest countries aren’t safe from internet censorship

Ensafi’s team found that censorship is increasing in 103 of the countries studied, including unexpected places like Norway, Japan, Italy, India, Israel and Poland. These countries, the team notes, are rated some of the world’s freest by Freedom House, a nonprofit that advocates for democracy and human rights. They were among nine countries where Censored Planet found significant, previously undetected censorship events between August 2018 and April 2020. They also found previously undetected events in Cameroon, Ecuador and Sudan. While the United States saw a small uptick in blocking, mostly driven by individual companies or internet service providers filtering content, the study did not uncover widespread censorship. However, Ensafi points out that the groundwork for that has been put in place here. “When the United States repealed net neutrality, they created an environment in which it would be easy, from a technical standpoint, for ISPs to interfere with or block internet traffic,” she said. “The architecture for greater censorship is already in place and we should all be concerned about heading down a slippery slope.”



Quote for the day:

"Beginnings are scary, endings are usually sad, but it's the middle that counts the most." -- Birdee Pruitt

Daily Tech Digest - November 22, 2020

It's time for banks to rethink how they secure customer information

To sum it up, banks and credit card companies really don't care to put too much effort into securing the accounts of customers. That's crazy, right?  The thing is, banks and credit card companies know they have a safety net to prevent them from crashing to the ground. That safety net is fraud insurance. When a customer of a bank has their account hacked or card number stolen, the institution is fairly confident that it will get its--I mean, the customer's--money back. But wait, the revelations go even deeper. These same institutions also admit (not to the public) that hackers simply have more resources than they do. Banks and credit card companies understand it's only a matter of time before a customer account is breached--these institutions deal with this daily. These companies also understand the futility of pouring too much investment into stopping hackers from doing their thing. After all, the second a bank invests millions into securing those accounts from ne'er-do-wells, the ne'er-do-wells will figure out how to get around the new security methods and protocols. From the bank's point of view, that's money wasted. It's that near-nihilistic point of view that causes customers no end of frustration, but it doesn't have to be that way.


The New Elements of Digital Transformation

Even as some companies are still implementing traditional automation approaches such as enterprise resource planning, manufacturing execution, and product life cycle management systems, other companies are moving beyond them to digitally reinvent operations. Amazon’s distribution centers deliver inventory to workers rather than sending workers to collect inventory. Rio Tinto, an Australian mining company, uses autonomous trucks, trains, and drilling machinery so that it can shift workers to less dangerous tasks, leading to higher productivity and better safety. In rethinking core process automation, advanced technologies are useful but not prerequisites. Asian Paints transformed itself from a maker of coatings in 13 regions in India to a provider of coatings, painting services, design services, and home renovations in 17 countries by first establishing a common core of digitized processes under an ERP system. This provided a foundation to build upon and a clean source of data to generate insights. Later, the company incorporated machine learning, robotics, augmented reality, and other technologies to digitally enable its expansion.


AI startup Graphcore says most of the world won't train AI, just distill it

Graphcore is known for building both custom chips to power AI, known as accelerators, and also full computer systems to house those chips, with specialized software. In Knowles's conception of the pecking order of deep learning, the handful of entities that can afford "thousands of yotta-FLOPS" of computing power -- the number ten raised to the 24th power -- are the ones that will build and train trillion-parameter neural network models that represent "universal" models of human knowledge. He offered the example of huge models that can encompass all of human languages, rather like OpenAI's GPT-3 natural language processing neural network. "There won't be many of those" kinds of entities, Knowles predicted. Companies in the market for AI computing equipment are already talking about projects underway to use one trillion parameters in neural networks. By contrast, the second order of entities, the ones that distill the trillion-parameter models, will require far less computing power to re-train the universal models to something specific to a domain. And the third entities, of course, even less power. Knowles was speaking to the audience of SC20, a supercomputing conference which takes place in a different city each year, but this year is being held as a virtual event given the COVID-19 pandemic.
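
For readers unfamiliar with the distillation step Knowles refers to, the sketch below shows the core idea in plain NumPy (not Graphcore's software, and with toy numbers): a smaller student model is trained to match the softened output distribution of a large, already-trained teacher.

```python
# Schematic knowledge-distillation loss: cross-entropy of the student against
# the teacher's temperature-softened predictions. Logits below are invented.
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    exp = np.exp(z)
    return exp / exp.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """Penalise the student for diverging from the teacher's softened distribution."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_log_probs = np.log(softmax(student_logits, temperature) + 1e-12)
    return -(teacher_probs * student_log_probs).sum(axis=-1).mean()

# Toy usage: two classes, a batch of three examples.
teacher = np.array([[4.0, 1.0], [0.5, 3.0], [2.0, 2.1]])
student = np.array([[2.5, 0.8], [0.2, 1.9], [1.0, 1.4]])
print(distillation_loss(teacher, student))
```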


5 Reasons for the Speedy Adoption of Blockchain Technology

Blockchain technology can only handle three to seven transactions per second, while the legacy transaction processing system is able to process tens of thousands of them every second. This led many observers to be unsure of the potential of blockchain as a viable option for large-scale applications. However, recent developments have resulted in promising ways to close this performance gap, and a new consensus mechanism is being developed. This mechanism is enabling participants (some of whom are unknown to each other) to trust the validity of the transactions. While performance may be sluggish and a lot of computational resources may be spent in the consensus mechanism, improving that performance is the key to popularizing the use of blockchain technology. The latest designs aim to reduce the time- and energy-intensive mining required to validate every transaction. Various blockchain-based applications are able to choose between performance, functionality, and security to suit what is most appropriate for the application. This consensus model is being especially appreciated in industries like auto-leasing, insurance, healthcare, supply chain management, trading, and more.
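
As a point of reference for why mining is so costly, here is a toy proof-of-work loop (illustrative only; real networks use far higher difficulty and specialised hardware): a block is only accepted once a nonce is found whose hash meets the difficulty target, which takes many attempts.

```python
# Toy proof-of-work: search for a nonce whose SHA-256 hash starts with
# `difficulty` zero hex digits. Difficulty 4 averages ~65,000 attempts.
import hashlib
import time

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Return the first nonce (and hash) meeting the difficulty target."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

start = time.time()
nonce, digest = mine("tx1,tx2,tx3", difficulty=4)
print(f"nonce={nonce} hash={digest[:16]}... took {time.time() - start:.2f}s")
```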


How next gen Internal Audit can play strategic role in risk management post-pandemic

The purpose of a business continuity plan is to ensure that the business is ready to survive a critical incident. It permits an instantaneous response to the crisis so as to shorten recovery time and mitigate the impact. This pandemic has presented an unprecedented “critical incident” for the globe. With unknown reach and duration, worldwide implications, and no base for accurate projections, we are very much in uncharted territory. Many organizations used to develop a disaster recovery plan and business continuity procedure that was rarely put to the test in a real crisis situation. With the arrival of newer risks, e.g. cyber-attacks, data transfer confidentiality issues, struggles with maintaining supply levels, workforce management, physical losses, operational disruptions, changes of marketing platforms, and the increased volatility and interdependency of the global economy, the traditionally accepted Business Continuity & Crisis Management Models are being rapidly and constructively challenged. Therefore, organizations need adequate planning resulting in immediate response, better decision-making, maximum recovery, effective communications, and sound contingency plans for various scenarios that may suddenly arise.


How to Build a Production Grade Workflow with SQL Modelling

A constructor creates a test query where a common table expression (CTE) represents each input mock data model, and any references to production models (identified using dbt’s ‘ref’ macro) are replaced by references to the corresponding CTE. Once you execute a query, you can compare the output to an expected result. In addition to an equality assertion, we extended our framework to support all expectations from the open-source Great Expectations library to provide more granular assertions and error messaging. The main downside to this framework is that it requires a roundtrip to the query engine to construct the test data model given a set of inputs. Even though the query itself is lightweight and processes only a handful of rows, these roundtrips to the engine add up. It becomes costly to run an entire test suite on each local or CI run. To solve this, we introduced tooling both in development and CI to run the minimal set of tests that could potentially break given the change. This was straightforward to implement with accuracy because of dbt’s lineage tracking support; we simply had to find all downstream models (direct and indirect) for each changed model and run their tests.
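
The construction described here (mock inputs as CTEs, with dbt ref() calls rewritten to point at them) can be sketched in a few lines of Python. This is a simplified illustration, not the team's actual framework, and the model names and SQL are invented:

```python
# Simplified sketch of building a test query: each mock input becomes a CTE,
# and references to production models are rewritten to the matching CTE name.
def build_test_query(model_sql: str, mocks: dict[str, list[dict]]) -> str:
    """Wrap model SQL with CTEs built from mock rows, one CTE per referenced model."""
    ctes = []
    for model_name, rows in mocks.items():
        selects = [
            " select " + ", ".join(f"{value!r} as {column}" for column, value in row.items())
            for row in rows
        ]
        ctes.append(f"{model_name} as (\n" + "\n union all\n".join(selects) + "\n)")
        # Replace the dbt ref() macro call with the CTE name.
        model_sql = model_sql.replace(f"{{{{ ref('{model_name}') }}}}", model_name)
    return "with " + ",\n".join(ctes) + "\n" + model_sql

orders_sql = "select customer_id, sum(amount) as total from {{ ref('orders') }} group by 1"
print(build_test_query(orders_sql, {"orders": [{"customer_id": 1, "amount": 10},
                                               {"customer_id": 1, "amount": 5}]}))
```

Running the generated query against the engine and comparing the result to the expected rows is then the assertion step the article describes.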


Google Services Weaponized to Bypass Security in Phishing, BEC Campaigns

For its part, Google stresses the company is taking every measure to keep malicious actors off their platforms. “We are deeply committed to protecting our users from phishing abuse across our services, and are continuously working on additional measures to block these types of attacks as methods evolve,” a Google spokesperson told Threatpost by email. The statement added that Google’s abuse policy prohibits phishing and emphasized that the company is aggressive in combating abuse. “We use proactive measures to prevent this abuse and users can report abuse on our platforms,” the statement said. “Google has strong measures in place to detect and block phishing abuse on our services.” Sambamoorthy told Threatpost that the security responsibility does not rest on Google alone and that organizations should not rely solely on Google’s security protections for their sensitive data. “Google faces a fundamental dilemma because what makes their services free and easy to use also lowers the bar for cybercriminals to build and launch effective phishing attacks,” he said. “It’s important to remember that Google is not an email security company — their primary responsibility is to deliver a functioning, performant email service.”


Democratize Data to Empower your Organization and Unleash More Value

Organizations, unsure whether they can trust their data, limit access, instead of empowering the whole enterprise to achieve new insights for practical uses. To drive new value—such as expanded customer marketing and increasing operational efficiencies—democratizing data demands building out a trusted, governed data marketplace, enabling mastered and curated data to drive your innovations that leapfrog the competition. To do this, trust assurance has become the critical enabler. But how to accomplish trust assurance, and how can data governance help accelerate it? If an organization is to convert data insights into value that drives new revenue, improves customer experience, and enables more efficient operations, the data needs controls to help ensure it’s both qualitative for reliable results as well as protected for appropriate, and compliant, use. According to IDC, we’re seeing a 61 percent compound annual growth rate (CAGR) in worldwide data at this moment—a rate of increase that will result in 175 zettabytes of data worldwide by 2025.


DDoS mitigation strategies needed to maintain availability during pandemic

According to Graham-Cumming, enterprises should start the process of implementing mitigating measures by conducting thorough due diligence of their entire digital estate and its associated infrastructure, because that is what attackers are doing. “The reality is, particularly for the ransomware folks, these people are figuring out what in your organisation is worth attacking,” he says. “It might not be the front door, it might not be the website of the company as that might not be worth it – it might be a critical link to a datacentre where you’ve got a critical application running, so we see people doing reconnaissance to figure out what the best thing to attack is. “Do a survey of what you’ve got exposed to the internet, and that will give you a sense of where attackers might go. Then look at what really needs to be exposed to the internet and, if it does, there are services out there that can help.” This is backed up by Goulding at Nominet, who says that while most reasonably mature companies will have already considered DDoS mitigation, those that have not can start by identifying which assets they need to maintain availability for and where they are located.


Empathy: The glue we need to fix a fractured world

Our most difficult moments force us to contend with our vulnerability and our mortality, and we realize how much we need each other. We’ve seen this during the pandemic and the continued struggle for racial justice. There has been an enormous amount of suffering but also an intense desire to come together, and a lot of mutual aid and support. This painful moment has produced a lot of progress and clarity around our values. Yet modern life, especially in these pandemic times, makes it harder than ever to connect with each other, and this disconnectedness can erode our empathy. But we can fight back. We can work to empathize more effectively. The pandemic, the economic collapse associated with it, and the fight for racial justice have increased all sorts of feelings, including empathy, anger, intolerance, fear, and stress. A big question for the next two to five years is which tide will prevail. ... Another problem is that there’s tribalism within organizations, especially larger organizations and those that are trying to put different groups of people with different goals under a single tent. For instance, I’ve worked with companies that include both scientists and people who are trying to market the scientists’ work. 



Quote for the day:

"Superlative leaders are fully equipped to deliver in destiny; they locate eternally assigned destines." -- Anyaele Sam Chiyson

Daily Tech Digest - November 21, 2020

How phishing attacks are exploiting Google's own tools and services

Armorblox's co-founder and head of engineering, Arjun Sambamoorthy, explains that Google is a ripe target for exploitation due to the free and democratized nature of many of its services. Adopted by so many legitimate users, Google's open APIs, extensible integrations, and developer-friendly tools have also been co-opted by cybercriminals looking to defraud organizations and individuals. Specifically, attackers are using Google's own services to sneak past binary security filters that look for traffic based on keywords or URLs. ... cybercriminals spoof an organization's security administration team with an email telling the recipient that they've failed to receive some vital messages because of a storage quota issue. A link in the email asks the user to verify their information in order to resume email delivery. The link in the email leads to a phony login page hosted on Firebase, Google's mobile platform for creating apps, hosting files and images, and serving up user-generated content. This link goes through one redirection before landing on the Firebase page, confusing any security product that tries to follow the URL to its final location. As it's hosted by Google, the parent URL of the page will escape the notice of most security filters.


Women in Data: How Leaders Are Driving Success

Next-gen analytics have helped to shift perception and enable the business to accelerate the use of data, according to panelist Barb Latulippe, Sr. Director Enterprise Data at Edward Life Sciences, who emphasized the trend toward self-service in enterprise data management. The days of the business going to IT are gone—a data marketplace provides a better user experience. Coupled with an effort to increase data literacy throughout the enterprise, such data democratization empowers users to access the data they need themselves, thanks to a common data language. This trend was echoed by panelist Katie Meyers, senior vice president at Charles Schwab responsible for data sales and service technologies. A data leader for 25 years, Katie focused on the role cloud plays in enabling new data-driven capabilities. Katie emphasized that we’re living in a world where data grows faster than our ability to manage the infrastructure. By activating data science and artificial intelligence (AI), Charles Schwab can leverage automation and machine learning to enable both the technical and business sides of the organization to more effectively access and use data. 


Developer experience: an essential aspect of enterprise architecture

Code that provides the structure and resources to allow a developer to meet their objectives with a high degree of comfort and efficiency is indicative of a good developer experience. Code that is hard to understand, hard to use, fails to meet expectations and creates frustration for the developer is typical of a bad developer experience. Technology that offers a good developer experience allows a programmer to get up and running quickly with minimal frustration. A bad developer experience—one that is a never-ending battle trying to figure out what the code is supposed to do and then actually getting it to work—costs time, money, and, in some cases, can increase developer turnover. When working with a company’s code is torturous enough, a talented developer who has the skills to work anywhere will take one of their many other opportunities and leave. There is only so much friction users will tolerate. While providing a good developer experience is known to be essential as one gets closer to the user of a given software, many times, it gets overlooked at the architectural design level. However, this oversight is changing. Given the enormous demand for more software at faster rates of delivery, architects are paying attention.


ISP Security: Do We Expect Too Much?

"The typical Internet service provider is primarily focused on delivering reliable, predictable bandwidth to their customers," Crisler says. "They value connectivity and reliability above everything else. As such, if they need to make a trade-off decision between security and uptime, they will focus on uptime." To be fair, demand for speed and reliable connections was crushing many home ISPs in the early days of the pandemic. For some, it remains a serious strain. "In the early weeks of the pandemic, when people started using their residential connections at once, ISPs were faced with major outages as bandwidth oversubscription and increased botnet traffic created serious bottlenecks for people working at home," says Bogdan Botezatu, director of threat research and reporting at Bitdefender. ISPs' often aging and inadequately protected home hardware presents many security vulnerabilities as well. "Many home users rent network hardware from their ISP. These devices are exposed directly to the Internet but often lack basic security controls. For example, they rarely if ever receive updates and often leave services like Telnet open," says Art Sturdevant, VP of technical operations at Internet device search engine Censys. "And on devices that can be configured using a Web page, we often see self-signed certificates, a lack of TLS for login pages, and default credentials in use."


Can private data as a service unlock government data sharing?

Data as a service (DaaS), a scalable model where many analysts can access a shared data resource, is commonplace. However, privacy assurance about that data has not kept pace. Data breaches occur by the thousands each year, and insider threats to privacy are commonplace. De-identification of data can often be reversed and has little in the way of a principled security model. Data synthesis techniques can only model correlations across data attributes for unrealistically low-dimensional schemas. What is required to address the unique data privacy challenges that government agencies face is a privacy-focused service that protects data while retaining its utility to analysts: private data as a service (PDaaS). PDaaS can sit atop DaaS to protect subject privacy while retaining data utility to analysts. Some of the most compelling work to advance PDaaS can be found with projects funded by the Defense Advanced Research Projects Agency’s Brandeis Program, ... According to DARPA, “[t]he vision of the Brandeis program is to break the tension between: (a) maintaining privacy and (b) being able to tap into the huge value of data. Rather than having to balance between them, Brandeis aims to build a third option – enabling safe and predictable sharing of data in which privacy is preserved.”


How to Create High-Impact Development Teams

Today’s high-growth, high-scale organizations must have well-rounded tech teams in place -- teams that are engineered for success and longevity. However, the process of hiring for, training and building those teams requires careful planning. Tech leaders must ask themselves a series of questions throughout the process: Are we solving the right problem? Do we have the right people to solve these problems? Are we coaching and empowering our people to solve all aspects of the problem? Are we solving the problem the right way? Are we rewarding excellence? Is 1+1 at least adding up to 2 if not 3? ... When thinking of problems to solve for the customers -- don’t constrain yourself by the current resources. A poor path is to first think of solutions based on resource limitations and then find the problems that fit those solutions. An even worse path is to lose track of the problems and simply start implementing solutions because “someone” asked for it. Instead, insist on understanding the actual problems/pain points. Development teams who understand the problems often come back with alternate, and better, solutions than the initial proposed ones. 


Apstra arms SONiC support for enterprise network battles

“Apstra wants organizations to reliably deploy and operate SONiC with simplicity, which is achieved through validated automation...Apstra wants to abstract the switch OS complexity to present a consistent operational model across all switch OS options, including SONiC,” Zilakakis said. “Apstra wants to provide organizations with another enterprise switching solution to enable flexibility when making architecture and procurement decisions.” The company’s core Apstra Operating System (AOS), which supports SONIC-based network environments, was built from the ground up to support IBN. Once running it keeps a real-time repository of configuration, telemetry and validation information to constantly ensure the network is doing what the customer wants it to do. AOS includes automation features to provide consistent network and security policies for workloads across physical and virtual infrastructures. It also includes intent-based analytics to perform regular network checks to safeguard configurations. AOS is hardware agnostic and integrated to work with products from Cisco, Arista, Dell, Juniper, Microsoft and Nvidia/Cumulus.


New EU laws could erase its legacy of world-leading data protection

As the European Union finalises new digital-era laws, its legacy of world-leading privacy and data protection is at stake. Starting next week, the European Commission will kick off the introduction of landmark legislative proposals on data governance, digital market competition, and artificial intelligence. The discussions happening now and over the next few months have implications for the future of the General Data Protection Regulation and the rights this flagship law protects. With Google already (predictably) meddling in the debate, it is imperative that regulators understand what the pitfalls are and how to avoid them. ... The first new legislation out of the gate will be the Data Governance Act, which the European Commission is set to publish on November 24. According to Commissioner Thierry Breton, the new Data Strategy aims to ensure the EU “wins the battle of non-personal data” after losing the “race on personal data”. We strongly object to that narrative. While countries like the US have fostered the growth of privacy-invasive data harvesting business models that have led to repeated data breaches and scandals such as Cambridge Analytica, the EU stood against the tide, adopting strong data protection rules that put people before profits.


The journey to modern data management is paved with an inclusive edge-to-cloud Data Fabric

We want everything to be faster, and that’s what this Data Fabric approach gets for you. In the past, we’ve seen edge solutions deployed, but you weren’t processing a whole lot at the edge. You were pushing along all the data back to a central, core location -- and then doing something with that data. But we don’t have the time to do that anymore. Unless you can change the laws of physics -- last time I checked, they haven’t done that yet -- we’re bound by the speed of light for these networks. And so we need to keep as much data and systems as we can out locally at the edge. Yet we need to still take some of that information back to one central location so we can understand what’s happening across all the different locations. We still want to make the rearview reporting better globally for our business, as well as allow for more global model management. ... Typically, we see a lot of data silos still out there today with customers – and they’re getting worse. By worse, I mean they’re now all over the place between multiple cloud providers. I may use some of these cloud storage bucket systems from cloud vendor A, but I may use somebody else’s SQL databases from cloud vendor B, and those may end up having their own access methodologies and their own software development kits (SDKs).


Rebooting AI: Deep learning, meet knowledge graphs

"Most of the world's knowledge is imperfect in some way or another. But there's an enormous amount of knowledge that, say, a bright 10-year-old can just pick up for free, and we should have RDF be able to do that. Some examples are, first of all, Wikipedia, which says so much about how the world works. And if you have the kind of brain that a human does, you can read it and learn a lot from it. If you're a deep learning system, you can't get anything out of that at all, or hardly anything. Wikipedia is the stuff that's on the front of the house. On the back of the house are things like the semantic web that label web pages for other machines to use. There's all kinds of knowledge there, too. It's also being left on the floor by current approaches. The kinds of computers that we are dreaming of that can help us to, for example, put together medical literature or develop new technologies are going to have to be able to read that stuff. We're going to have to get to AI systems that can use the collective human knowledge that's expressed in language form and not just as a spreadsheet in order to really advance, in order to make the most sophisticated systems."



Quote for the day:

"To have long term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley

Daily Tech Digest - November 20, 2020

How software-defined storage (SDS) enables continuity of operations

Creating a new layer completely based on software in your infrastructure stack means that costs for hardware can be minimised, while boosting multi-cloud strategies. “Traditionally, regardless of how complex and well maintained your data centre, its fixed position was a weakness easily exploited or corrupted by theft, disaster or power-related issues,” said Ben Griffin, sales director at Computer Disposals Ltd. “It’s for precisely this reason that SDS is a reliable partner – offering a means of continuation should the worst happen. “Decoupling storage from hardware, as is the case with SDS, brings a huge range of benefits for the day-to-day duties of IT personnel. And, from a broader company-wide perspective, it enables simpler continuity through challenging periods by relying less on owned hardware and more on flexible, accessible and affordable multi-cloud environments. “One of the great attributes of SDS is scalability, and this, in turn, is often one of the principal means of business continuity. Should a business need to downsize in challenging times, with a view to reinvesting in personnel later down the line, SDS provides this ability with none of the usual challenges associated with managing a physical data centre.”


A perspective on security threats and trends, from inception to impact

The abuse of legitimate tools enables adversaries to stay under the radar while they move around the network until they are ready to launch the main part of the attack, such as ransomware. For nation-state-sponsored attackers, there is the additional benefit that using common tools makes attribution harder. In 2020, Sophos reported on the wide range of standard attack tools now being used by adversaries. “The abuse of everyday tools and techniques to disguise an active attack featured prominently in Sophos’ review of the threat landscape during 2020. This technique challenges traditional security approaches because the appearance of known tools doesn’t automatically trigger a red flag. This is where the rapidly growing field of human-led threat hunting and managed threat response really comes into its own,” said Wisniewski. “Human experts know the subtle anomalies and traces to look for, such as a legitimate tool being used at the wrong time or in the wrong place. To trained threat hunters or IT managers using endpoint detection and response (EDR) features, these signs are valuable tripwires that can alert security teams to a potential intruder and an attack underway.”
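
The "legitimate tool at the wrong time or in the wrong place" tripwire Wisniewski describes can be approximated with a very simple rule. The sketch below is hypothetical (invented log format, hosts and hours) and far cruder than real EDR analytics, but it shows the shape of the check:

```python
# Hypothetical tripwire: flag a legitimate admin tool when it runs outside
# business hours or from a host that is not an approved admin workstation.
from datetime import datetime

WATCHED_TOOLS = {"psexec.exe", "powershell.exe", "wmic.exe"}
BUSINESS_HOURS = range(8, 19)                 # 08:00-18:59 local time
ADMIN_HOSTS = {"it-admin-01", "it-admin-02"}  # illustrative allow-list

def suspicious(event: dict) -> bool:
    """Return True if a watched tool ran at the wrong time or in the wrong place."""
    if event["process"].lower() not in WATCHED_TOOLS:
        return False
    hour = datetime.fromisoformat(event["timestamp"]).hour
    return hour not in BUSINESS_HOURS or event["host"] not in ADMIN_HOSTS

event = {"timestamp": "2020-11-20T02:13:00", "host": "finance-laptop-17", "process": "PsExec.exe"}
print(suspicious(event))  # True: right tool, wrong time and wrong place
```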


Evolution of Emotet: From Banking Trojan to Malware Distributor

Ever since its discovery in 2014, when Emotet was a standard credential stealer and banking Trojan, the malware has evolved into a modular, polymorphic platform for distributing other kinds of computer viruses. Being constantly under development, Emotet updates itself regularly to improve stealthiness, persistence, and add new spying capabilities. This notorious Trojan is one of the most frequently encountered malicious programs in the wild. Usually, it is a part of a phishing attack, email spam that infects PCs with malware and spreads among other computers in the network. ... In recent versions, a significant change in strategy has happened. Emotet has turned into polymorphic malware, downloading other malicious programs to the infected computer and the whole network as well. It steals data, adapts to various detection systems, and rents the infected hosts to other cybercriminals as a Malware-as-a-Service model. Since Emotet uses stolen emails to gain victims' trust, spam has consistently remained the primary delivery method for Emotet—making it convincing, highly successful, and dangerous.


Decentralised Development: Common Pitfalls and how VSM can Avoid Them

A value stream mapping exercise should involve all of the teams that would ever collaborate on a release. Bringing everyone together ensures that all parts of the process are being recognised and tracked on the map. Ideally, there should be two sessions, the first focused on building a map of the current value stream. This is essentially a list of every single action that is completed from start to finish in the delivery pipeline. It includes all of the governance tests that need to be conducted, how all of the individual actions relate to each other, and which actions cannot be completed until something else has been done first. It’s important to be very thorough during this process, and make sure that every action is accounted for. Once the entire map is complete, you are left with an accurate picture of everything that needs to be done as part of the release pipeline. Not surprisingly, most companies don’t have this visibility today, but it will be invaluable moving forward. For product managers in particular, having a concrete outline of all of the processes that are occurring gives them a clear sense of all the moving parts.


Now Available: Red Hat Ansible Automation Platform 1.2

The Ansible project is a remarkable open source project with hundreds of thousands of users and a large community. Red Hat extends this community and open source developer model to innovate, experiment and incorporate feedback in order to address our customers' challenges and use cases. Red Hat Ansible Automation Platform transforms Ansible and many related open source projects into an enterprise-grade, multi-organizational automation platform for mission-critical workloads. In modern IT infrastructure, automation is no longer a nice-to-have; it is now often a requirement to run, operate and scale how everything is managed, including network, security, Linux, Windows, cloud and more. Ansible Automation Platform includes a RESTful API for seamless integration with existing IT tools and processes. The platform also includes a web UI with an intuitive, push-button interface that lets novice users consume and operate automation with safeguards in place. This includes role-based access control (RBAC) to help govern who can run which automation job on which equipment, as well as enterprise integrations with TACACS+, RADIUS, and Active Directory. Ansible Automation Platform also enables advanced workflows.
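As one hedged illustration of the RESTful API mentioned above, the snippet below launches a job template from Python. The controller URL, OAuth2 token, and template ID are placeholders, and the /api/v2/job_templates/<id>/launch/ path reflects the Tower/AWX v2 API, so check it against the documentation for your platform version.

```python
import requests

# Placeholder values: adjust for your own Automation Platform controller.
CONTROLLER = "https://tower.example.com"
TOKEN = "REPLACE_WITH_OAUTH2_TOKEN"
JOB_TEMPLATE_ID = 42

headers = {"Authorization": f"Bearer {TOKEN}"}

# Launch a job template through the platform's REST API.
resp = requests.post(
    f"{CONTROLLER}/api/v2/job_templates/{JOB_TEMPLATE_ID}/launch/",
    headers=headers,
    json={"extra_vars": {"target_env": "staging"}},  # assumes the template allows extra vars
)
resp.raise_for_status()
job = resp.json()  # response shape may vary by version; typically includes the new job's id
print("Launched job", job.get("id"), "status:", job.get("status"))
```

RBAC still applies here: the token's owner can only launch templates they have been granted execute rights on.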


How Cyberattacks Work

Cyberattacks have been increasing in number and complexity over the past several years, and given the prevalence of incidents, along with signals that larger attacks could be on the horizon, it is a good time to examine what goes into a cyberattack. Breaches can occur when a bad actor hacks into a corporate network to steal private data. They also occur when information is stolen from cloud-based infrastructure. Many people assume that security breaches only happen to sizable corporations, but Verizon found that 43% of breaches affect small businesses; in fact, this was the largest cohort measured. And the damage such businesses experience is considerable: 60% go out of business within six months of an attack. Small businesses make appealing targets because their security is usually not as advanced as that found within large enterprises. Systems may be outdated, and bugs often go unpatched for long periods. SMBs also tend to have fewer resources available to manage an attack, limiting their ability to detect, respond, and recover. Additionally, small businesses can serve as testing grounds where hackers refine their methods before unleashing an attack on a bigger fish.


Time to Rethink Your Enterprise Software Strategy?

The response to process and software changes depends on where you are in your digital transformation journey. Early adopters of digital transformation could be hailed as geniuses in hindsight. Those still in their journey are speeding up to make that last push to completion in case another round of pandemic, locusts, or other plagues circles the globe. Those followers and laggards who treated digital transformation as if it were a passing trend may find themselves the proverbial coyote riding their “Acme Digital Transformation Rocket” off the COVID cliff. But, thanks to technology (NOT from Acme), there is hope. As organizations, including major software vendors, moved to Agile frameworks to deliver software and implementations more quickly, a convergence of technologies and services fell into place. Cloud services have been around for a while, but the push to move infrastructure to cloud platforms and software as a service (SaaS) has been nothing short of remarkable. With the latest release of rapid-deployment low-code/no-code tools from Salesforce, Microsoft, Amazon, and Google/Alphabet, the toolsets are now designed for two speeds: fast and faster. Changing the software and changing the processes are related, but they are two different paths.


The Fintech Future: Accelerating the AI & ML Journey

Fintechs across the world are dealing with the effects of Covid-19 and face an uphill challenge in containing its impact on the financial system and the broader economy. With rising unemployment and stagnating economies, individuals and companies are struggling with debt, while the world in general is awash in credit risk. This has pushed operational resilience to the top of fintech CXOs’ agendas, requiring them to focus on systemic risks while continuing to deliver innovative digital services to customers. To make matters worse, criminals are exploiting vulnerabilities introduced by the shift to remote operations post-Covid-19, increasing the risk of fraud and cybercrime. For fintechs, building and maintaining robust defences has therefore become a critical priority. Organisations around the globe are forging new models to combat financial crime in collaboration with governments, regulators, and even other fintechs. Technological advances in data analytics, AI and machine learning (ML) have been driving fintechs’ response to the crisis, accelerating the automation journey many had already embarked on. Until recently, fintechs relied on traditional methods of data analysis for applications such as detecting fraud and predicting defaults, methods that require complex and time-consuming investigations.
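As a rough illustration of where ML departs from those traditional methods, the sketch below trains an unsupervised anomaly detector on synthetic transaction features. It is not any fintech's production model; the features, parameters, and the choice of scikit-learn's IsolationForest are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transaction features: [amount, hour_of_day, merchant_risk_score]
normal = np.column_stack([
    rng.normal(60, 20, 1000),        # typical purchase amounts
    rng.integers(8, 22, 1000),       # daytime activity
    rng.uniform(0.0, 0.3, 1000),     # low-risk merchants
])

# Learn what "normal" looks like, then flag deviations.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspicious = np.array([[4800.0, 3, 0.9]])   # large amount, 3 a.m., risky merchant
print(model.predict(suspicious))            # -1 marks a likely anomaly
```

Real systems layer such models with rules, supervised classifiers trained on labelled fraud, and human review, but learning "normal" and flagging deviations is the common thread.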


Managing Metadata: An Examination of Successful Approaches

Metadata turns critical ‘data’ into critical ‘information.’ Critical information is data + metadata that feeds Key Performance Indicators (KPIs). He recommends asking: “What will change with a better understanding of your data?” Getting people on board involves understanding how metadata can solve problems for end users while meeting company objectives. “We want to be in a position to say, ‘I do this and your life gets better.’” To have a greater impact, he said, avoid ‘data speak’ and engage with language that the business understands. For example, the business won’t ask for a ‘glossary.’ Instead they will ask for ‘a single view of the customer, integrated and aligned across business units.’ An added benefit of using accessible language is being perceived as helpful, rather than being seen as adding to the workload. ... When documenting the Information Architecture, Adams suggests focusing on how the information flows around the architecture of the organization, rather than focusing on specific systems. Start with the type of information and where it resides and denote broad applications and system boundaries. Include data shared with people outside the organization.


Meet the Microsoft Pluton processor – The security chip designed for the future of Windows PCs

The Microsoft Pluton design incorporates the lessons learned from delivering hardware root-of-trust-enabled devices to hundreds of millions of PCs. The Pluton design was introduced as part of the integrated hardware and OS security capabilities of the Xbox One console, released in 2013 by Microsoft in partnership with AMD, and is also used within Azure Sphere. Bringing Microsoft’s IP technology directly into the CPU silicon helped guard against physical attacks, prevent the discovery of keys, and provide the ability to recover from software bugs. Through the effectiveness of the initial Pluton design, we have learned a great deal about how to use hardware to mitigate a range of physical attacks. Now we are taking what we learned to deliver on a chip-to-cloud security vision and bring even more security innovation to the future of Windows PCs. Azure Sphere leveraged a similar security approach to become the first IoT product to meet the “Seven properties of highly secure devices.” The shared Pluton root-of-trust technology will maximize the health and security of the entire Windows PC ecosystem by leveraging the security expertise and technologies of the companies involved.



Quote for the day:

"If we were a bit more tolerant of each other's weaknesses we'd be less alone." -- Juliette Binoche

Daily Tech Digest - November 19, 2020

Japan Is Using Robots As A Service To Fight Coronavirus And For Better Quality Of Life

Connected Robotics has developed a number of other food-related robots, including a machine that produces soft-serve ice cream, another that prepares deep-fried foods often sold at Japanese convenience stores, and yet another that cooks bacon and eggs for breakfast. Along the way, it was selected for the Japanese government’s J-Startup program, which highlights promising startups in Japan, and has raised over 950 million yen ($9.1 million) from investors including Global Brain, Sony Innovation Fund and 500 Startups Japan. Connected Robotics wants robots to do more than just prepare food in the kitchen. It is collaborating with the state-backed New Energy and Industrial Technology Development Organization (NEDO) to tackle the task of loading and unloading dishwashers. Under the project, one robot arm will prewash dishes and load them into a dishwashing machine, and another arm will store the clean dishes. The company aims to roll out the machine next spring, part of its goal to have 100 robot systems installed in Japan over the next two years. From that point, it wants to branch out overseas into regions such as Southeast Asia.


DevOps Chat: Adopting Agile + DevOps in Large Orgs

The first step towards creating a change is to acknowledge that you need to change, because once everyone acknowledges that you need to change you can start to think about, well, what sort of change do we want to have? And this has to be a highly collaborative effort. It’s not just your CXOs locking themselves in a conference room at a resort for two days and whiteboarding this out. This has to be a truly collaborative exercise across the entire organization. There may need to be some constraints that are in place, and what I’ve typically seen is a board of directors might say, “Look, this is where we want to be five years from now,” not necessarily saying how that needs to be achieved but saying this is where we want to be. That in combination with creating that sense of urgency says, “Look, if we need to hit this sort of a target we need to be able to develop software faster or close out outages quicker or have a more Agile procurement process or whatever else it might be,” and that’s when you start to identify these sort of strategic and tactical initiatives. Because you’re working collaboratively you’re dispersing that cognitive effort, if you will. So it’s not just somebody from one particular vantage point saying, “As a CFO I think we are giving away too much money so we’re going to be cost-cutting only.”


How empathy drives IT innovation at Thermo Fisher Scientific

The development of an instrument here has always involved an incredible amount of scientific innovation and industrial design. Over the last few years, we have had the added complexity of building software and IoT sensors right into the products. We also develop software that enables our customers’ lab operations. My digital engineering organization has agile capabilities that we can carry over to the development we are doing on customer software solutions. The product groups make the decisions about what solution to develop, but our agile teams, who have been developing software for years, now help the product teams with delivery. ... Now that cloud services are mature, we are all in. We are also very excited about artificial intelligence and its transformational potential, particularly in life sciences. We are putting AI into our customer service operations so that agents spend more time helping a customer and aren’t worrying about how quickly they can finish their service call. AI is also becoming very important for gene sequencing and diagnostics in drug manufacturing. We are only scratching the surface there, but by creating hybrid AI teams made up of both IT and product people, we avoid reinventing the wheel.


CAP Theorem, PACELC, and Microservices

These architectural decisions have impacts that extend beyond data centers and cloud providers. They affect how users interact with the system and what their expectations are. In an eventually consistent system, it is important that users understand that when they issue a command and get a response, it doesn't necessarily mean the command completed, only that it was received and is being processed. User interfaces should be designed to set expectations accordingly. For example, when you place an order from Amazon, you don't immediately get a response in your browser indicating the status of your order. The server doesn't process your order, charge your card, or check inventory. It simply returns a response letting you know you've successfully checked out. Meanwhile, a workflow has been kicked off in the form of a command that has been queued up. Other processes interact with the newly placed order, performing the necessary steps. At some point the order actually is processed, and yet another service sends an email confirming it. This is the out-of-band response the server sends you: not through your browser, synchronously, but via email, eventually. And if there is a problem with your order, the service that checked out your cart on the website doesn't deal with it.
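A minimal sketch of this accept-then-process pattern is shown below, with an in-process queue standing in for a durable message broker and a print statement standing in for the confirmation email. The function and field names are invented for illustration.

```python
import queue
import threading
import time
import uuid

orders = queue.Queue()  # stands in for a durable message broker

def checkout(cart: dict) -> dict:
    """Synchronous path: accept the command, acknowledge it, and nothing more."""
    order_id = str(uuid.uuid4())
    orders.put({"order_id": order_id, **cart})
    return {"order_id": order_id, "status": "accepted"}  # "accepted", not "completed"

def order_worker() -> None:
    """Asynchronous path: charge the card, check inventory, then notify out of band."""
    while True:
        order = orders.get()
        time.sleep(0.1)  # pretend to charge the card and check stock
        print(f"email: order {order['order_id']} confirmed")
        orders.task_done()

threading.Thread(target=order_worker, daemon=True).start()
print(checkout({"items": ["book"], "total": 19.99}))  # returns immediately
orders.join()  # only for this demo: wait until the background work finishes
```

The synchronous response promises only that the command was received; completion, or failure, is reported out of band later.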


How to Evolve and Scale Your DevOps Programs and Optimize Success

First, there is the fact that simple DevOps managerial topologies can be effective at integrating the workflows of small teams dedicated to developing a particular software product, but they are far less effective when applied to a firm's total software output. Techniques such as normalized production processes make it easier for large firms to catch bugs, but an over-eager application of these techniques runs the risk of homogenizing otherwise highly adapted teams. Secondly, there is the challenge of inconsistency. Because, as we've explained above, DevOps techniques generally arise within small, forward-looking teams, they are also highly adapted to the particular needs of those teams. This means that managers trying to standardize DevOps across an organization can be faced with dozens (if not hundreds) of different approaches, many of which have proven to be very effective. Finally, there is a more basic problem: communication. Where DevOps works best, it facilitates close and continuous communication between operations and development staff. Achieving this level of communication not only requires that communication technologies and platforms are in place; it also demands that teams be small enough to genuinely talk to each other.


Majority of APAC firms pay up in ransomware attacks

The complexity of operating cloud architectures also had a significant impact on an organisation's ability to recover following a ransomware attack, according to Veritas. Some 44% of businesses with fewer than five cloud providers in their infrastructure needed fewer than five days to recover, compared with 12% of those with more than 20 providers. And while 49% of businesses with fewer than five cloud providers could restore 90% or more of their data, only 39% of their peers running more than 20 cloud services were able to do likewise. In Singapore, 49% said their security had kept pace with their IT complexity. Their counterparts in India, at 55%, were the most confident in the region that their security measures were keeping pace with their IT complexity. Just 31% in China said likewise, along with 36% in Japan, 39% in South Korea, and 43% in Australia. With ransomware attacks expected to keep increasing amid accelerated digital transformation efforts and the normalisation of remote work, enterprises in the region will need to ensure they can detect and recover from such attacks. Andy Ng, Veritas' Asia-Pacific vice president and managing director, underscored the vendor's recommended three-step layered approach: detect, protect, and recover.


How to speed up malware analysis

The main part of dynamic analysis is the use of a sandbox: a tool for executing suspicious programs from untrusted sources in an environment that is safe for the host machine. There are different approaches to sandbox analysis; sandboxes can be automated or interactive. Online automated sandboxes let you upload a sample and receive a report about its behaviour, which is a good solution compared with assembling and configuring a separate machine for the purpose. Unfortunately, modern malicious programs can detect whether they are running on a virtual machine or a real computer, and many require active user interaction before they will execute. You also need to deploy your own virtual environment, install operating systems, and set up the software needed for dynamic analysis to intercept traffic, monitor file changes, and so on. Moreover, adjusting settings for every file takes a lot of time, and you cannot influence the analysis directly. Keep in mind that analysis does not always go to plan, and things may not work out as expected for a particular sample. Finally, automated sandboxes lack the speed we need, since we may have to wait up to half an hour for the whole analysis cycle to finish. All of these drawbacks can damage security if an unusual sample remains undetected. Thankfully, we now have interactive sandboxes.
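To give a sense of one small piece a sandbox automates, the sketch below logs every filesystem change a sample triggers while it runs. It assumes the third-party watchdog package and a placeholder directory; a real sandbox would combine this with traffic capture, registry and process monitoring, and VM snapshotting.

```python
import time
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

MONITORED_DIR = "/sandbox/target"   # placeholder: directory the sample runs in

class ChangeLogger(FileSystemEventHandler):
    def on_any_event(self, event):
        # Record every create/modify/delete/move the sample triggers.
        print(f"{event.event_type}: {event.src_path}")

observer = Observer()
observer.schedule(ChangeLogger(), path=MONITORED_DIR, recursive=True)
observer.start()
try:
    time.sleep(300)   # let the suspicious sample run for a while
finally:
    observer.stop()
    observer.join()
```

An interactive sandbox wraps this kind of instrumentation in a live session, so the analyst can click through installers or respond to prompts that automated runs would miss.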


Graph Databases Gaining Enterprise-Ready Features

Enterprise graph database vendor TigerGraph recently unveiled the results of performance benchmark tests conducted on representative enterprise uses of its scalable application. Touted by the company as a comprehensive graph data management benchmark study, the tests used almost 5TB of raw data on a cluster of machines to demonstrate TigerGraph's performance. The study used the Linked Data Benchmark Council Social Network Benchmark (LDBC SNB), a reference standard for evaluating graph technology performance under intensive analytical and transactional workloads. According to the vendor, the results showed that graph databases can scale with real data, in real time. TigerGraph claims it is the first vendor in the industry to report LDBC benchmark results at this scale. The data showed that TigerGraph can run deep-link OLAP queries on a graph of almost nine billion vertices (entities) and more than 60 billion edges (relationships), returning results in under a minute, according to the announcement. TigerGraph's performance was measured with the LDBC SNB Benchmark scale-factor 10K dataset on a distributed cluster. The queries were written in TigerGraph's GSQL query language and were compiled and loaded into the database as stored procedures.
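"Deep-link" queries are essentially multi-hop traversals. The benchmark itself uses GSQL, but the toy Python breadth-first search below, over an invented six-vertex graph, shows the shape of the work a graph engine is optimising at billions-of-edges scale.

```python
from collections import deque

# Toy social graph: vertex -> neighbours (edges).
graph = {
    "alice": ["bob", "carol"],
    "bob":   ["dave"],
    "carol": ["dave", "erin"],
    "dave":  ["frank"],
    "erin":  [],
    "frank": [],
}

def k_hop_neighbours(start: str, k: int) -> set:
    """Breadth-first traversal up to k hops: the core of a deep-link query."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen - {start}

print(k_hop_neighbours("alice", 3))   # everyone reachable within three hops
```

At benchmark scale the traversal logic is the easy part; the engineering challenge is running it over a partitioned, distributed graph without shipping large parts of the data across the network.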


Solving the performance challenges of web-scale IT

Two other components to consider when it comes to performance monitoring for web-scale IT are an in-memory database and a strong testing protocol. Kieran Maloney, IT services director at Charles Taylor, explained: “Utilising an in-memory database and caching are ways to improve the performance of web-scale applications, particularly for things like real-time or near-real-time analytics. “The majority of cloud infrastructure service providers now offer PaaS services that include in-memory capabilities, which increase the speed at which data can be searched, accessed, aggregated and analysed; examples include Azure SQL, Amazon ElastiCache for Redis, Google Memorystore, and Oracle Cloud DB. “The other consideration for solving performance management is a testing approach for identifying what the actual performance is, and for determining whether there are specific aspects of the application that are not performing, or that could be ‘tuned’ to improve performance or the cost of operation. “There are a number of application performance testing tools available that give business and IT teams real-time usage, performance and capacity analysis, allowing them to proactively respond and make interventions to improve performance as required.”
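A hedged sketch of the caching pattern Maloney describes: cache a slow analytics query in Redis with a short TTL so repeated requests are served from memory. The host, key names, TTL, and query are placeholders, and the example uses the redis-py client.

```python
import json
import time
import redis  # third-party redis-py client

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def expensive_analytics_query(region: str) -> dict:
    time.sleep(2)  # stand-in for a slow database aggregation
    return {"region": region, "active_users": 1234}

def cached_analytics(region: str, ttl: int = 60) -> dict:
    key = f"analytics:{region}"
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)                 # served from memory, no slow query
    result = expensive_analytics_query(region)
    cache.setex(key, ttl, json.dumps(result))  # expire after ttl seconds
    return result

print(cached_analytics("apac"))  # ~2 s on a cache miss
print(cached_analytics("apac"))  # near-instant on a hit, until the TTL expires
```

The same pattern applies to the managed services named above; only the connection details change.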


Optimising IT spend in a turbulent economic climate

Having complete transparency and visibility of an organisation's entire IT ecosystem is an essential first step to optimising costs. This includes having a full, holistic view across all solutions, whether they are on-premises or in the cloud. The reality, however, is that many businesses have a fragmented view of the technology applications within their organisation, which makes identifying inefficiencies extremely difficult. Even before the shift to remote work, the evolution of department-led technology purchasing had caused many IT teams to lose visibility of their technology estate, including accounting for what is being used, how much, and which tools are left inactive but still paid for. ... Once a clear view of all technology assets has been established, IT teams can start to assess the organisation's current usage and spend. With many employees working from home, it is likely they will be using a variety of new tools to work effectively. While it can be difficult to determine exactly what is being used and by whom when many workers are remote, having this information is crucial to effectively reducing redundancies.



Quote for the day:

"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley