Daily Tech Digest - November 30, 2019

We’ve got to regulate the application of AI — not the tech itself


Another important factor that governments and businesses will need to be aware of will be devising methods to prevent the rise of AI used with malicious intent, e.g. for hacking or fraudulent sales. Most cyber-experts predict that cyberattacks powered by AI will be one of the biggest challenges of the 2020s, which means that regulations and preventative measures should be implemented as with any other industry: designed specifically for the application. Stringent qualification processes will also need to be established for certain industries. For example, Broadway show producers have been driving ticket sales through an automated chatbot, with the show Wicked boasting ROI increases of up to 700 percent. This has also allowed producers to sell tickets for 20 percent higher than the average weekly price. Regulations will need to address the fact that AI and bots have the potential to take advantage of consumers’ wallets, which means that policymakers will need to work closely with firms that are gradually beginning to rely on chatbots to make sure that consumer rights are not being breached.



How Smart Home Tech Is Shaking Up The Insurance Industry

Ring video doorbell
Through smart home devices, homeowners are able to remain connected to their property 24/7, whether at home, work or on holiday. In turn, this constant connectivity instils a psychological shift in householders, encouraging them to take a more proactive approach to home security and protection. ... For example, while water damage may not top the list of worries for homeowners, it can cost thousands of pounds to repair and is one of the most common types of domestic property damage claims. However, with a leak sensor installed, escaping water can be caught quickly and customers will even be alerted via a notification to their smartphone. This knowledge is critical, as homeowners are able to call out a plumber on the same day – at a fixed fee – and contain the damage. This proactivity benefits both sides. For insurers, responsible and safe homeowners pose less of a risk, resulting in lower premiums. It’s a win-win all round. Moreover, the additional information gained from the steady stream of signals sent to the insurer from in-home sensors and monitors can allow claim handlers to remain better informed in the event of an incident.


Fintech Regulation Needs More Principles, Not More Rules


It is important to recognize that principles-based regulation is not a euphemism for “deregulation” or a “light-touch” approach—far from it. Principles-based regulation is a different way of achieving the same regulatory outcomes as rules-based regulation; it simply does so in what is, in many cases, a more efficient and flexible manner. That flexibility also prevents subversion of those outcomes through the kind of loopholes that revealed the inherent vulnerability of rules-based regulation in the run-up to the financial crisis. Of course, in practice, it is rare to have either a purely principles-based or a purely rules-based regulation. Rather, they represent two ends of the regulatory spectrum. Every principles-based regulatory regime has some rules, and every rules-based regime has some element of principle. For this reason, we frequently see hybrid regulatory systems of principles and rules.


Singapore wants widespread AI use in smart nation drive


"Domestically, our private and public sectors will use AI decisively to generate economic gains and improve lives. Internationally, Singapore will be recognised as a global hub in innovating, piloting, test-bedding, deploying and scaling AI solutions for impact," said the SNDGO, which is part of the Prime Minister's Office. To kick off its efforts, the government identified five national projects that focused on key industry challenges, including intelligent freight planning in transport and logistics, chronic disease prediction and management in healthcare, and border clearance operations in national safety and security. These form part of nine sectors that have been earmarked for heightened deployment as AI is expected to generate high social and economic value for Singapore. These verticals include manufacturing, finance, cybersecurity, and government. The national AI strategy also outlined five key enablers that the government deemed essential in building a "vibrant and sustainable" ecosystem for AI innovation and adoption. A robust data architecture, for instance, would be necessary for the public and private sectors to manage and exchange information securely, so AI algorithms can have access to quality datasets for training and testing.


How To Thrive At Work: 10 Strategies Based On Brain Science

Brain science can help you thrive at work
In his book, The Shallows, Nicholas Carr demonstrates how our internet usage has rewired our brains. We think superficially, skimming, glancing and scanning rather than reading or processing more deeply. Cal Newport, in his book Deep Work, advocates for focusing, contemplating and concentrating. His contention is that this distraction-free thinking has become increasingly rare and is a skill we must learn (or relearn). In fact, empathy—so critical to our humanity—is impossible without deeply considering others’ situations. And the ability to solve problems and develop ideas cannot happen effectively without depth of thought. Tell stories. While communicating facts tends to engage limited portions of the brain, hearing a story engages multiple parts of the brain. One study in particular, using MRI, found that participants had greater understanding and retention of concepts based on the engagement of multiple parts of the brain. Other researchers, including Dr. Paul Zak, have demonstrated that hearing stories that include conflicts and meaningful characters tends to engage us emotionally. The resulting release of oxytocin leads us to trust the messages and morals the story is trying to convey.


3 Reasons This Stock Is a Top Cybersecurity Pick

Hacker in a hoodie sitting with a laptop.
Check Point's research and development expenses increased 20% year over year while selling and marketing expenses rose nearly 10.5%. Both of these metrics outpaced the company's actual revenue growth. In fact, Check Point has stepped up its investment in both of these line items in the past year or so, and the positive impact is visible on the company's subscription growth. The company is now looking to get into lucrative cybersecurity niches as well. Check Point recently announced the acquisition of Internet of Things (IoT)-focused cybersecurity start-up Cymplify. Check Point will integrate Cymplify's expertise into its Infinity cybersecurity architecture so that clients can protect their IoT devices -- such as smart TVs, medical devices, and IP cameras -- against cyberattacks. This should open up a big growth opportunity for Check Point because according to IHS Markit, cybersecurity is the fastest-growing IoT niche. The firm predicts that the IoT data security market will grow from $3 billion in revenue this year to $7 billion in 2022 as more original equipment manufacturers (OEMs) move to secure their IoT devices.


5G radiation no worse than microwaves or baby monitors: Australian telcos

"When we've done our tests on our 5G network, they're typically 1,000 to 10,000 times less than what we get from other devices. So when you add all of that up together, it's all very low in terms of total emission. But you're finding that 5G is in fact a lot lower than many other devices we use in our everyday lives." Wood added there is no evidence for cancer or non-thermal effects from radio frequency EME. "There's some evidence for biological effects, but none of these are non-adverse," Wood told the committee. "So they've really looked at all of the research they need to set a safety standard, and in summary what they said is that, if you follow the guidelines, they're protective of all people, including children." On the issue of governmental revenue raising from its upcoming spectrum sale, Optus said it would be wrong of government to view it as a cash cow, as every dollar spent on spectrum is not used on creating networks. "Critically, in order to achieve the coverage and deployment required, 5G networks will require significant amounts of spectrum," the Singaporean-owned telco wrote.


How can businesses stop AI from going bad?

Starting from the very beginning of the process, CIOs can help AI be “good” by ensuring that the data being used to create the algorithms is itself ethical and unbiased. Gathering and using data from ethical sources significantly reduces the risk of harbouring toxic datasets which may infect systems with problematic biases further down the line. This is especially crucial for highly regulated industries, which will need to identify biases already present and remedy them accordingly. Using insurance as an example, CIOs should take care not to include data that heavily features one particular demographic or gender, which might skew averages and inform non-representative policies. Collecting a rich sample of ethical, GDPR-compliant, representative data from consenting customers actually benefits the accuracy of the AI it powers, and it also reduces the work needed to “clean” it.
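To make the data-balance point concrete, here is a minimal sketch of the kind of pre-training representation check described above, assuming the records sit in a pandas DataFrame. The column name and the equal-share baseline are illustrative assumptions; a real insurer would compare against actual population statistics.

```python
# Minimal sketch of a pre-training bias check on a pandas DataFrame.
# The "gender" column and the equal-share parity baseline are hypothetical
# placeholders; real checks compare against known population benchmarks.
import pandas as pd

def representation_report(df: pd.DataFrame, column: str, tolerance: float = 0.10):
    """Flag groups whose share of the data strays far from an equal split."""
    shares = df[column].value_counts(normalize=True)
    parity = 1.0 / len(shares)  # naive equal-share baseline
    for group, share in shares.items():
        if share > parity + tolerance:
            flag = "OVER-represented"
        elif share < parity - tolerance:
            flag = "UNDER-represented"
        else:
            flag = "ok"
        print(f"{column}={group}: {share:.1%} of records ({flag})")

df = pd.DataFrame({"gender": ["F", "M", "M", "M", "M", "F"]})
representation_report(df, "gender")
```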


INNOPHYS Develops Muscle Suit for Physical Labor

The suit can lift upwards of 30kg. While it won’t do the lifting on its own, it can take that weight off its wearer. It offers support in the form of hydraulically-controlled artificial muscles which are housed in an aluminum backpack linked to the waist joints. The pack provides two axes of movement: one for bending at the waist and another for supporting the thighs. The suit can be controlled in two ways: the wearer can either blow into a tube or touch a control surface with their chin, creating a hands-free control system for the exoskeleton. The muscle suit is wrapped inside a custom, water-repellent bag. This protects the device from the elements and gives it a softer appearance. ... Many other Japanese companies have also taken up the challenge of producing suits to assist in physical labor. Companies like HAL have already gained a stable foothold in the exoskeleton industry with their series of robotic suits. Nevertheless, the Muscle Suit is an awe-inspiring invention by this venture company from the Tokyo University of Science.



Yes—at least in some circumstances, both researchers said. Bordes’s group, for example, is creating a benchmark test that can be used to train a machine learning algorithm to automatically detect deepfakes. And Rossi said that, in some cases, A.I. could be used to highlight potential bias in models created by other artificial intelligence algorithms. While technology could produce useful tools for detecting—and even correcting—problems with A.I. software, both scientists emphasized people should not be lulled into complacency about the need for critical human judgment. “Addressing this issue is really a process,” Rossi told me. “When you deliver an A.I. system, you cannot just think about these issues at the time the product is ready to be deployed. Every design choice ... can bring unconscious bias.” You can read more about our discussion and watch a video here. ... “Yes, it is true that A.I. is only as good as the data it has been fed,” she said. But, she argued, this potentially gave people tremendous power.



Quote for the day:


"Whenever you see a successful business, someone once made a courageous decision." -- Peter F. Drucker


Daily Tech Digest - November 29, 2019

Cybersecurity: The web has a padlock problem - and your internet safety is at risk


Even now, encryption is sometimes discussed as if it's a bonus when using the internet, when it needs to become the standard way of doing things everywhere on the internet, Helme explained. "We need it to become so ingrained and embedded into everything that we do that it's boring and we don't need to talk about it because it shouldn't be special. Encryption should be the boring default that we don't need to talk about," he said. The security industry therefore needs to step up and help fix the issue, Helme argued, because by doing this, it takes the responsibility for deciding if a website is safe or not away from the user – something that will help make the internet safer for everyone. "We need to take encryption and make it the default, universal – it needs to be everywhere," he said, adding: "The lack of encryption on the web is actually a bug. And what we're doing now isn't adding a new feature for an improvement or a new thing: we're going back and fixing a mistake we made in the beginning."
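In practice, making encryption the default for a website comes down to two routine server settings: redirect all plain HTTP traffic to HTTPS, and send an HSTS header so browsers stop attempting HTTP at all. A minimal nginx sketch, with the hostname and certificate paths as placeholders:

```nginx
# Make HTTPS the boring default: plaintext requests get a permanent redirect.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.pem;    # placeholder path
    ssl_certificate_key /etc/ssl/private/example.com.key;  # placeholder path

    # HSTS: browsers remember to refuse plain HTTP for this host for a year.
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}
```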


Simplifying a data problem can ensure better buy-in

Like many complex technical topics, an ability to share a relatable and very human story can engender action far more quickly than the most thoughtful technical arguments, or detailed integration diagrams combined. Similarly, an ability to find an impactful story can serve as a sanity check for your data-related projects. If you can't concisely articulate how gathering, sharing, or analyzing data can have a real impact on your business or its customers, then perhaps the project is not as valuable as you thought or will present an uphill battle for funding that may not have been obvious purely on the technical merits. Look for opportunities to condense your data-related endeavors into a simplified, relatable metric. Asking, "What if we had sales data a week earlier?" may more easily get funding for your data lake project than a 90-slide presentation about the merits of Hadoop. Similarly, you'll have a guiding objective for your data projects that's more readily understandable than a Gantt chart or status slide, and often is more successful at generating continued interest and excitement in the endeavor.


Will the future of work be ethical? Future leader perspectives


As a consumer of a lot of technology and as someone of the generation who has grown up with a phone in my hand, I’m aware my data is all over the internet. I’ve had conversations [with friends] about personal privacy and if I look around the classroom, most people have covers for the cameras on their computers. This generation is already aware [of] ethics whenever you’re talking about computing and the use of computers. About AI specifically, as someone who’s interested in the field and has been privileged to be able to take courses and do research projects about that, I’m hearing a lot about ethics with algorithms, whether that’s fake news or bias or about applying algorithms for social good. ... Today we had that debate about robots and people’s jobs and robot taxes. That’s a very good debate to have, but it sometimes feeds a little bit into the AI hype and I think it may be a disservice to society to try to pull back technology, which has been shown to have the power to save lives. It can be two transformations that are happening at the same time. One, that’s trying to bridge an inequality and is going to come in a lot of different and complicated solutions that happen at multiple levels and the second is allowing for a transformation in technology and AI.


Critical thinking, linking different lines of thought, and anticipating counter-arguments are all valuable debating skills that humans can practice and refine. While these skills are tougher for an AI to get good at since they often require deeper contextual understanding, AI does have a major edge over humans in absorbing and analyzing information. In the February debate, Project Debater used IBM’s cloud computing infrastructure to read hundreds of millions of documents and extract relevant details to construct an argument. This time around, Debater looked through 1,100 arguments for or against AI. The arguments were submitted to IBM by the public during the week prior to the debate, through a website set up for that purpose. Of the 1,100 submissions, the AI classified 570 as anti-AI, or of the opinion that the technology will bring more harm to humanity than good. Another 511 arguments were found to be pro-AI, and the remaining 19 were irrelevant to the topic at hand.


The power and promise of AI in the coming year and beyond

AI advancements are also happening rapidly in the area of sales productivity. Over the past year, the level at which businesses are utilising AI to grow their business has skyrocketed. It’s become standard for companies to use AI to improve predictive business software and to make more effective decisions. Using heavy duty machine learning analytics as a standard business practice is now widely accepted. Looking even farther down the road, there are those who believe that computers will be just as smart as humans in about two decades. I personally love reading about the subject of singularity and quantum computing. It’s fascinating to hear about its potential. Naturally, one could argue that humans might not want computing to become as smart as us. We’ve all watched movies centered around apocalyptic devastation! But, in my opinion, AI stands to improve our lives in ways that we have yet to consider, especially at home. While AI is becoming commonplace in customer service and sales, we are a long way from having a robot cooking us dinner or cleaning our apartments.



CISOs and CMOs – Joined At The Hip in the Era of Big Data

Today, data is the lifeblood of business. Businesses have access to copious amounts of consumer data that can be leveraged to gain a better understanding of their market and customer base. To the CMO, this is a gold mine – more detailed insight into the wants, needs, habits and activities of their target demographics. These can result in initiatives with large scopes and larger budgets. On the flip side, the CISO sees the red flags and vulnerabilities that come along with this information. Privacy and security threats, technological limitations, and reputational risk are all on the radar. Commonly their response is to reel the scope back in to reduce risk and budget. As you may expect, this can result in internal friction as to who is truly responsible for the management of this data, making it more important than ever for the CISO and CMO to establish an effective working relationship. In order for your organization to best capitalize on the benefits of big data, the CISO and CMO must work together cohesively.
With AI-based technology, it’s possible to increase the efficiency, objectivity and accuracy of work on vehicle production lines, while enhancing safety and enabling a higher volume of work with the same amount of resources. By detecting faults at an early stage, we can prevent a potential breakdown and reduce maintenance costs over the lifetime of the vehicle. These faults might include loose bolts, incorrectly routed cables, damage to paintwork or underinflated tyres, to name a few examples. What’s more, with manual checks, manufacturers not only risk overlooking faults on their vehicles, but also waste time that could be more productively allocated elsewhere in the factory. An intelligent AI-based system greatly enhances speed and efficiency, improving the flow of vehicles through and out of the plant. With all of this in mind, we expanded the breadth and capabilities of UVeye’s technology to other areas of a vehicle’s exterior, such as the tyres and bodywork.


No Blockchain to Rule Them All
The benefits of 5G are huge compared to 4G: it offers much higher data speeds (1-20 Gbit/s), much lower latency (1 ms), increased capacity as the network grows and it uses very high frequencies (3.5 GHz). The challenge with 5G is that it requires a lot more antennas than 4G networks. This is because 5G uses millimetre waves, which are a lot shorter than 4G wavelengths. As a result, it can carry a lot more data, but it means a much shorter range. As a result, to achieve a reliable 5G signal, you need a lot more 5G antennas. Placing these antennas will take time, so it will take another 2-3 years before we will have a broad, reliable 5G network. However, until then, enterprises are already building their own private 5G network to enable machine-to-machine communication. 5G will be vital for the 4th industrial revolution, and the first successful pilots have been done. Earlier this year, Ericsson, Vodafone and eGO launched the first 5G car factory in Germany. 


Palo Alto Networks Employee Data Breach Highlights Risks Posed by Third Party Vendors


Palo Alto Networks has declined to name the vendor concerned, or provide details of where on the internet the data appeared, but it has said that it has terminated the contract of their careless vendor. We would all like to think that the companies we work for would put robust demands on those external firms that provide products and services that they will be careful with our data - whether it be information about our products and services, intellectual property, customers, or employees. But however much you may demand in a contract that your providers have proper security measures and practices in place to reduce the chances of a breach or hack, you can never have 100% certainty that accidents and goofs won't happen. All you can do is limit the amount of sensitive data that your external providers have access to, ensuring that they can only access the information that they absolutely need to do their job and no more.


The Implications of Last Week's Exposure of 1.2B Records

Data enrichment is a legal but controversial practice. "The industry exists for the purpose of influencing people and giving you access to people you want to influence," says Farrow, who has heard both sides of the argument. On one hand, employees often use this data to ensure they're not sending mailers to or cold-calling the wrong people. They could get the same information themselves on Facebook or LinkedIn; data aggregators speed up the process. At the same time, it "feels like an intrusion on our privacy," he says. Cybercriminals can use this leaked data to influence victims to their advantage. A leak like this gives attackers access to organized and meaningful information, as opposed to a broad data dump. It forces those affected to think twice about who they trust — about whether a message is legitimate or malicious. Further, there is a difference between this data leak and other security breaches in which credit card numbers or passwords are stolen.



Quote for the day:


"There is no 'one' way to be a perfect leader, but there are a million ways to be a good one." -- Mark W. Boyer


Daily Tech Digest - November 28, 2019

Cutting Cybersecurity Budgets In A Time of Growing Threats

Greater spending on cybersecurity products hasn't translated into a better organizational security posture. Despite the millions of dollars spent by organizations year after year, the average cost of a cyberattack jumped by 50% between 2018 and 2019, hitting $4.6 million per incident. The percentage of cyberattacks that cost $10 million or more nearly doubled to 13% over the same period. Enterprises are using a diverse array of endpoint agents, including decryption, AV/AM and EDR. The use of multiple security products may, in fact, weaken an organization’s security position: the more agents an endpoint has, the greater the probability it will get breached. This wide deployment makes it difficult to standardize a specific test to measure security and safety without sacrificing speed. Buying more cybersecurity tools tends to plunge enterprises into a costly cycle of spending more time and resources on security solutions without experiencing any parallel increase in security. However, in a mad chicken-and-egg pursuit, this trend of spending more on security products persists due to the rising costs of a security breach.



Digital transformation: Business modernization requires a new mindset

A lot of executives actually want to share their frustrations, and one of the frustrations, especially with more, let's just say, legacy-oriented organizations, I'll hear about millennials all the time. And then also the coming of centennials. In that they do want to work differently, they do think differently, and infrastructures, and also models, don't necessarily support that way of thinking and way of working. The consumerization of technology, it hasn't just affected millennials or the younger workforce, it's affected all of us. I think, anybody who has a smartphone or uses social media, or has ordered an Uber or Lyft, or DoorDash, or Postmates, you name it, we have, as human beings, radically transformed. Our brains have radically transformed as we use more of these technologies, we're multitasking, we're doing a million things. Employees get something like 200 notifications during their work day, just from their phone and social and email. So a lot of the way that we have to think about work has to change. We have to think bigger than the millennial workforce.


Hotel front desks are now a hotbed for hackers


First spotted in 2015 but appearing to be most active this year, RevengeHotels has struck at least 20 hotels in quick succession. The threat actors focus on hotels, hostels, and hospitality & tourism companies. While the majority of the RevengeHotels campaign takes place in Brazil, infections have also been detected in Argentina, Bolivia, Chile, Costa Rica, France, Italy, Mexico, Portugal, Spain, Thailand, and Turkey. The threat group deploys a range of custom Trojans in order to steal guest credit card data from infected hotel systems as well as financial information sent from third-party booking websites such as Booking.com. The attack chain begins with a phishing email sent to a hospitality organization. Professionally-written and making use of domain typo-squatting to appear legitimate, the researchers say the messages are detailed and generally impersonate real companies.  These messages contain malicious Word, Excel or PDF documents, some of which will exploit CVE-2017-0199, a Microsoft Office RCE vulnerability patched in 2017.
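The typo-squatted domains in such campaigns are one of the easier signals to screen for mechanically. As an illustration only (the watch list, threshold, and function are invented for this sketch, not taken from the research), a similarity check using Python's standard library:

```python
# Rough sketch: flag sender domains suspiciously similar, but not identical,
# to brands on a watch list. List and threshold are illustrative assumptions.
from difflib import SequenceMatcher

KNOWN_BRANDS = ["booking.com", "expedia.com"]  # hypothetical watch list

def looks_typosquatted(domain: str, threshold: float = 0.85) -> bool:
    domain = domain.lower()
    for brand in KNOWN_BRANDS:
        similarity = SequenceMatcher(None, domain, brand).ratio()
        if domain != brand and similarity >= threshold:
            return True  # near-miss of a known brand, e.g. a swapped character
    return False

print(looks_typosquatted("bo0king.com"))  # True  ('0' swapped in for 'o')
print(looks_typosquatted("example.com"))  # False (not close to any brand)
```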


Regaining ROI by reducing cloud complexity

Illustration of a woman in a suit hopping across clouds in a blue sky
“The first thing is admitting that there’s an issue, which is a tough thing to do,” Linthicum acknowledges. “It essentially requires creating an ad hoc organization to get things back on track and simplified, whether that’s hiring outside specialists, or doing it internally. “The good thing about that is typically you can get 10 times ROI over a two-year period if you spend the time on reducing complexity,” he says. Even with that incentive, reducing complexity involves a cultural change: shifting to a proactive, innovative, and more thoughtful culture, which many organizations are having trouble moving towards, he warned. The most effective way to do that is really retraining, replacing, or revamping. “That’s going to be a difficult thing for most organizations,” Linthicum says. “I’ve worked with existing companies that had issues like this, and I find it was the hardest problem to solve. But it’s something that has to be solved before we can get to the proactivity, before we can get to using technology as a force multiplier, before we can get to the points of innovation.”


Top 5 SD-WAN Takeaways for 2019
Auto failover, redundancy, simplified management, and cost savings topped the list of factors driving SD-WAN adoption, according to Avant Communications’ SD-WAN report. “It is Avant’s belief that SD-WAN will continue to make ongoing incursions into the higher-end enterprise, beginning at remote offices and other edges of the network, and then reaching steadily closer toward the core,” the report reads. One of the biggest promises made by many SD-WAN vendors is that the technology will reduce costs by shifting bandwidth off of — and in some cases eliminating the need for — expensive MPLS connections. And while this can be true, with more than half of companies surveyed in the aforementioned Avant report indicating that cost savings over MPLS was a key concern, the majority were still split on whether to keep or replace their MPLS connections in favor of SD-WAN and broadband internet. Roughly 40% of those surveyed said they planned to use a hybrid solution that combines the two.


Autonomous systems, aerial robotics and Game of Drones

Now, automation has basically enabled a level of productivity that you see today. But automation is very fragile, inflexible, expensive… it’s very cumbersome. Once you set them up and when everything is working well, it’s fantastic, and that is what we live with today. You know, autonomous systems, we think, can actually make that a lot easier. Now the broad industry is really still oriented toward automation. So we have to bring that industry over slowly into this autonomous world. And what’s interesting is, while these folks are experts in mechanical engineering and operations research and, you know, all those kind of important capabilities and logistics, they don’t know AI very much.  ... They don’t know how to create horizontal tool chains which enable efficient development and operations of these type of systems. So that’s the expertise we bring. I’d add one more point to it, is that the places we are seeing autonomous systems being built, like autonomous driving, they’re actually building it in a very, very vertical way.


How Machine Learning Enhances Performance Engineering and Testing


During testing, there are numerous signs that an application is producing a performance anomaly, such as delayed response time, increased latency, hanging, freezing, or crashing systems, and decreased throughput. The root cause of these issues can be traced to any number of sources, including operator errors, hardware/software failures, over- or under-provisioning of resources, or unexpected interactions between system components in different locations. There are three types of performance anomalies that performance testing experts look out for. ... Machine learning can be used to help determine statistical models of "normal" behavior in a piece of software. They are also invaluable for predicting future values and comparing them against the values being collected in real-time, which means they are constantly redefining what "normal" behavior entails. A great advantage of machine learning algorithms is that they learn over time. When new data is received, the model can adapt automatically and help define what "normal" is month-to-month or week-to-week.
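A minimal version of that "learn normal, flag deviations" idea can be sketched with a rolling mean and standard deviation over response times. The window size, threshold, and sample data below are illustrative assumptions, not a reference implementation:

```python
# Sketch of statistical anomaly detection on response times: model "normal"
# as a trailing window's mean/std and flag points with a high z-score. The
# trailing window also means the definition of "normal" keeps updating.
from statistics import mean, stdev

def detect_anomalies(latencies_ms, window=20, z_threshold=3.0):
    """Yield (index, value) pairs that deviate sharply from recent history."""
    for i in range(window, len(latencies_ms)):
        history = latencies_ms[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(latencies_ms[i] - mu) / sigma > z_threshold:
            yield i, latencies_ms[i]

# 40 samples hovering around 100 ms, then a sudden spike
samples = [100 + (i % 5) for i in range(40)] + [450]
print(list(detect_anomalies(samples)))  # [(40, 450)] -- the spike is flagged
```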


How Microsoft is using hardware to secure firmware

"Given the increase in firmware attacks we've seen in the last three years alone, the goal was to remove firmware as a trusted component of the boot process, so we're preventing these kinds of advanced firmware attacks," Dave Weston, director of OS security at Microsoft, told TechRepublic. The first line of the Windows boot loader on Secured-core PCs puts the CPU into a new security state where, instead of accepting the measurements made during Secure Boot, even though they're in the TPM, it goes back and revalidates the measurement. If they don't match, the PC doesn't boot and goes into BitLocker recovery mode instead. If you're managing the PC via Intune, it also sends a signal to the service that the device can't be trusted and shouldn't be allowed to connect to your network. "These PCs use the latest silicon from AMD, Intel, and Qualcomm that have the Trusted Platform Module 2.0 and Dynamic Root of Trust (DRTM) built in. The root of trust is a set of functions in the trusted computing module that is always trusted by a computer's OS and embedded in the device," Weston explains.



Not a single investment deal worth $100 million or more has been signed with an all-women team over the past four years, and only 7% of such deals went to mixed teams in 2019.  That's still a slight improvement on the previous year, when every single mega-round went to teams led exclusively by men. Sarah Nöckel, investment associate at VC firm Dawn Capital, told ZDNet: "Europe is lagging behind on diversity. In general, there is still an ongoing unconscious bias towards women. There needs to be a lot more education to change mentalities." The issue is not that women are absent from the tech space. Out of 1,200 European tech founders that were surveyed in the report, nearly a quarter identified as women.  As it dug further, the report also found that women and men are almost equally qualified for science and engineering careers. In fact in some countries, like Lithuania, the number of women who are scientists and engineers surpasses that of men. Women can and do found tech companies, therefore; the problem is rather that they then struggle to secure enough capital to develop their projects.


"Security campaigns do not work," says infosec professor Adam Joinson


The researchers' conclusions are based on a case study they performed with a large engineering services firm, based in the UK and employing more than 30,000 people. They found that - "whether we were talking to security practitioners or whether we were talking to employees" - security was not seen as something that supported the business; instead, it was perceived as a block. "In fact, they would see it as almost an adversary of employees," trying to catch and sanction workers for security breaches. One of the reasons for this was a misalignment between security policies and processes, and the lack of tools provided for employees to do their jobs. As part of an engineering firm, employees often had to deal with "massive" files from architects and similar, but the company limited emails to a 15MB attachment limit and did not allow workers to use USB sticks. Cloud storage, in one particular case, was banned by a client's security policies. "Effectively, security stopped them from doing the core function of their role."



Quote for the day:


"Don't necessarily avoid sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld


Daily Tech Digest - November 27, 2019

10 Predictions How AI Will Improve Cybersecurity In 2020

Nicko van Someren, Ph.D. and Chief Technology Officer at Absolute Software, observes that “Keeping machines up to date is an IT management job, but it's a security outcome. Knowing what devices should be on my network is an IT management problem, but it has a security outcome. And knowing what's going on and what processes are running and what's consuming network bandwidth is an IT management problem, but it's a security outcome. I don't see these as distinct activities so much as seeing them as multiple facets of the same problem space, accelerating in 2020 as more enterprises choose greater resiliency to secure endpoints.” ... Josh Johnston, Director of AI at Kount, predicts that “the average consumer will realize that passwords are not providing enough account protection and that every account they have is vulnerable. Captcha won’t be reliable either, because while it can tell if someone is a bot, it can’t confirm that the person attempting to log in is the account holder. AI can recognize a returning user. AI will be key in protecting the entire customer journey, from account creation to account takeover, to a payment transaction. ...”


Wolfram Language has limitations, and has been described by some users as better suited to solving a wide range of predetermined tasks, rather than being used to build software. It also seems there is still a way to go for Wolfram Language – it didn't, for example, feature in the IEEE's recent list of top programming languages. Wolfram has said that Wolfram Language is not just a language for telling computers what to do, but a way for both computers and humans to represent computational ways of thinking about things. Of late Wolfram has been more bold in how he talks about Wolfram Language, describing it as a "computational language" that could even help bridge the gulf between ourselves and future non-human intelligences, be they artificial intelligence (AI) or extraterrestrial. As esoteric a pursuit as it might seem, Wolfram believes the need for this lingua franca is timely, as machine-learning systems increasingly make decisions about our lives -- whether that's screening loan applications today or maybe even choosing whether to kill people tomorrow.


Tech jobs: These are the skills hiring managers are looking for now


CompTIA noted that the technology workforce, in particular, has been under the microscope for its lack of diversity. Diversity in tech staffing is likely to improve due to continuing pressure, the association said, but "fully diverse and inclusive environments still lie further in the future". A wide range of research and anecdotal examples proves that there's still much work to do in achieving equity, from data on wage gaps to the makeup of executive teams to ongoing reports of abusive behaviour, CompTIA said. Although 30% of companies feel that there has been significant improvement in the diversity of the tech workforce over the past two years, previous CompTIA research shows that "sentiment tends to skew more positive than reality on this topic." "The trend may be heading in the right direction, but the chasm was so wide that it will take significant time and intentional changes to close," said CompTIA, noting that there is a long list of potential actions that could improve the situation. Flexible work arrangements, including the physical environment, can create more opportunities and a more welcoming atmosphere, especially if there is a hard look at how the existing arrangements unintentionally create barriers, the association said.


AI Is The Link Between Big Data & Persons-Level Measurement

To highlight the shortcomings of big data from a measurement perspective, we conducted an analysis in the U.S. earlier this year that compared set-top box data with set-top box data that we calibrated with Nielsen panel data. The analysis found that the uncalibrated data is inherently biased and underrepresents minority audiences. That’s not to say, however, that big data has no value. Quite the opposite. But it does need to be grounded in a foundational truth set. That’s where our panels and artificial intelligence (AI) come into play. Our panel data—the key to persons-level measurement—is the perfect truth set for training big data. Through the application of AI, we use big data to dramatically broaden our measurement capabilities while preserving quality and representativeness. Today, AI is integral in our measurement methodologies. For example, it played a pivotal role in the development of our enhanced measurement capabilities for local TV markets, which combines the scale of big data (return path data, or RPD, from TV sets) with fully representative in-market panel data.


GDPR Data Regulations & Commercial Fines


The public and private sector are both impacted, although government agencies have more leeway across GDPR in general due to requirements to retain and use data to deliver services to citizens. In terms of what best practice should be in dealing with a request, the advice from the UK’s Information Commissioner’s Office is that there should be a policy for recording all “subject access requests” and that based on Recital 59 of the GDPR, organisations “provide means for requests to be made electronically, especially where personal data are processed by electronic means.” This process will start with an access request form but when it comes to identity, the guidance is unclear. A number of organisations are asking for a similar set of documents that most banks require to open an account which includes a “proof of identity” such as a passport, photo driving license or birth certificate along with a “proof of address” such as a utility bill, bank statement or credit card statement. This requirement to verify from copies or scans of electronic documents is a major weakness in this process. 


Non-functional
Simply put, a non-functional requirement is a specification that describes the system’s operational capabilities and constraints that enhance its functionality. These may be speed, security, reliability, etc. We’ve already covered different types of software requirements, but this time we’ll focus on non-functional ones, and how to approach and document them. If you’ve ever dealt with non-functional requirements, you may know that different sources and guides use different terminology. For instance, the ISO/IEC 25000 standards framework defines non-functional requirements as system quality and software quality requirements. BABOK, one of the main knowledge sources for business analysts, suggests the term non-functional requirements (NFR), which is currently the most common definition. Nevertheless, these designations consider the same type of matter – the requirements that describe operational qualities rather than the behavior of the product. The list of them also varies depending on the source.


The Road to 2030 Must Be Circular


What gets exciting is when you can find the perfect material match in someone else’s waste. Carbon fiber is a great example. It turns out computers use a similar grade of carbon fiber to airplanes. So we reclaim aerospace material for Latitude, our commercial notebook line. To date, Dell has prevented more than 2 million pounds of carbon fiber from ending up in landfills. And in this case, the benefits go far beyond the environment. We’ve partnered with Carbon Conversions, a start-up based in South Carolina with a mission to reclaim and recycle carbon fiber. Carbon Conversions has redesigned and reengineered the papermaking process to produce carbon fiber non-woven fabrics, bringing new growth to an area historically impacted by overseas manufacturing. Finding more partners like Carbon Conversions will be important. It will also be important to increase our own recycling streams dramatically (i.e. you all have a role to play too). We must make it as easy as possible for you to recycle.


Bringing Business and IT Together, Part II: Organizational Alignment

COA is similar to other continuous improvement processes such as continuous quality improvement (CQI) and continuous process improvement (CPI). Just as CQI and CPI demand structure and metrics, so too does COA. Continuous improvement is evolutionary and incremental. It is manageable only when understood as a set of interconnected components that can be identified and measured. The COA Framework illustrated in Figure 1 provides the necessary structure. This three-dimensional structure associates the core elements of COA – those of organizational alignment and working relationships – with the activities of continuous improvement. The framework identifies the components that can be managed, measured, and modified to improve the overall alignment of business and technology organizations. ... Organization-to-organization relations are ideally structured and business-like. Conversely, person-to-person relationships are best when unstructured and friendly. Team-to-team relationships seek a balance between the two extremes.


VMware doubles up on Kubernetes play


Many of our large customers have Kubernetes clusters on vSphere, Amazon EC2 and sometimes bare metal. These are managed by different teams, making it difficult to manage and control everything. That was a problem we wanted to solve. Then comes the next question on how we can help customers build and deploy new applications. Historically, we’ve relied on Pivotal as a partner to help customers modernise their applications. While Pivotal Cloud Foundry is a great platform, Pivotal last year decided to use Kubernetes as the default runtime for their developer platform. Meanwhile, Spring Boot was becoming the de facto way by which people built microservices. So, we felt that by bringing Pivotal into the family, we could offer a very comprehensive solution to help customers build, run and manage their modern applications.


Using Kanban with Overbård to Manage Development of Red Hat JBoss EAP

Red Hat JBoss EAP (Enterprise Application Platform) has become a very complex product. As a result, planning EAP releases is also increasingly complicated. In one extreme case of the team working on the next major release while developing features for the previous minor release, the planning for that major release was ongoing for 14 months with the requirements constantly changing. However, spending more effort on planning didn't improve the end result; it didn't make us any smarter or more accurate. We'd rather spend more time doing stuff rather than talking about it. That was a major problem. In addition, there were cases in which requirements could be misunderstood or miscommunicated and we found that out late in the cycle. We had to find a way to collectively iterate over a requirement and make sure everyone understood what was to be done. In some cases we could go as far as implementing a proof-of-concept before we would be certain we fully understood the problem and the proposed solution.



Quote for the day:


"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik


Daily Tech Digest - November 26, 2019

Exploit kits, or EKs, are web-based applications hosted by cyber-criminals. EK operators usually buy web traffic from malvertising campaigns or botnet operators. Traffic from malicious ads or hacked websites is sent to an EK's so-called "gate" where the EK operator selects only users with specific browsers or Adobe Flash versions and redirects these possible targets to a "landing page." Here is where the EK runs an exploit -- hence the name exploit kit -- and uses a browser or Flash vulnerability to plant and execute malware on a user's computer. But in a report released last week, Malwarebytes researchers say EK operators are changing their tactics. Instead of relying on dropping malware on disk and then executing the malware, at least three of the nine currently active EKs are now using fileless attacks. A fileless attack relies on loading the malicious code inside the computer's RAM, without leaving any traces on disk. Fileless malware has been around for more than half a decade, but this is the first time EKs are broadly adopting the technique.


Samsung adds two modems to help enable wider 5G rollout


"Samsung has tapped its leadership in semiconductor and network technology–and combined it with its expertise in 5G research and development–to introduce one of the industry's first SoC 5G New Radio modems: the S8600 and S9100," Johnston wrote. ASICs based System-on-a-Chip (SoC) product designs have become popular because they are more power efficient and have increased operating frequency capabilities, addressing the high-volume, mass production requirements that the industry is now demanding. "These new modems support two architectural options for operators. The S8600 powers Samsung's Digital Unit in separated radio-digital configurations for both 4G and 5G, while the S9100 powers Samsung's 5G integrated Access Unit," he added in his blog post about the new modems. Johnston added that most companies are opting for more power-conscious circuits that are permanent and application-specific, as opposed to circuitry that needs to be programmed or reconfigured. The new Samsung tools will help support 5G networks that are easier to enable, smaller in size and more efficient in how they use power, he said.



The Impact of Cloud Computing on the Insurance Industry

Companies that use cloud systems greatly reduce the cost of purchasing hardware and software, thanks to on-demand and pay-per-use options. They no longer have to buy local servers and data centers, which require specialized personnel to manage and maintain, and which take up physical space and consume electricity 24 hours a day, 7 days a week. And, since most services are provided on-demand, you can have access to abundant computing resources quickly, easily, and with the flexibility your business needs, without an expensive hardware or software investment. All of this is in favor of optimizing performance and internal processes, also because, by hosting platforms, software, and databases remotely, you’re able to free up memory and computing power on individual machines within the organization. Optimization and efficiency also apply to the production of documents, such as policies, forms, and contracts of various kinds.


T-Mobile data breach affects more than 1 million customers


Few details of the breach have been made public, other than the fact that it was a cyber attack and that approximately 1.5% of T-Mobile’s 75 million customers were affected – about 1.1 million. T-Mobile added that the suspicious activity was initially spotted at the beginning of November, with criminal hackers accessing the information of prepaid wireless account holders. Although the organisation promptly reported the incident to the authorities, it has waited until now to inform customers and the public – presumably to ensure it had all the facts straight. There are few things worse than announcing the details of a data breach only to later find that things are much worse than you initially thought. This happens all too often, with organisations facing an initial backlash, then adding fuel to the fire with more bad news. Because the breach occurred in the US rather than the EU, it isn’t subject to the GDPR (General Data Protection Regulation), which would have required T-Mobile to inform customers within 72 hours of learning about it.


Why the IT4IT™ Standard is Key to Driving Business Value for CIOs


The IT4IT standard provides the CIO with a holistic overview of what their organization is doing well and what needs improvement, as well as highlighting how to close the gaps across the business. Three transformation use cases that the IT4IT standard helps accelerate are: re-architecting to co-create strategy with the business; rationalizing the application portfolio to reduce waste and free up funds for innovation programs; and driving automation by analyzing and selecting integration points for automation to improve the quality and speed of product and service delivery. The pressure to continually innovate and adopt the most effective solutions is likely to remain in today’s business landscape. But in order to create real value, today’s CIO must not only focus on innovation but on empowering the IT system to work as a competitive driver. They must think holistically and prioritize the management of IT processes to meet the demands of customers, increased competition, and a changing business climate.


Adoption of Cloud-Native Architecture, Part 1: Architecture Evolution and Maturity

Software design practices like domain-driven design (DDD) and enterprise integration patterns (EIP) have been available since 2003 or so, and some teams were already developing applications as modular services, but traditional infrastructure like heavyweight J2EE application servers for Java applications and IIS for .NET applications didn't help with modular deployments. With the emergence of cloud hosting and especially PaaS offerings like Heroku and Cloud Foundry, the developer community had everything it needed for true modular deployment and scalable business apps. This gave rise to the microservices evolution. Microservices offered the possibility of fine-grained, reusable functional and non-functional services. Microservices became more popular in 2013 - 2014. They are powerful, and enable smaller teams to own the full-cycle development of specific business and technical capabilities. Developers can deploy or upgrade code at any time without adversely impacting the other parts of the systems.


Why your CEO’s personal risk taking matters


People expect CEOs to be risk takers, which makes sense given the nature of the job. That belief may be why corporate boards have been relatively forgiving of the kind of eccentric, grandiose, and sometimes dangerous behavior that the media laps up — and that the public and investors question when it is exposed. After all, it matches the “risk seeker” stereotype. But the #MeToo movement and the occasionally egregious behavior of bubble-economy CEOs suggests that times are changing. Boards and shareholders want to be confident not only that CEOs are comfortable taking business risks, but that they have good judgment about which risks to pursue and when to take a pass. “CEOs meaningfully outscore other executives in embracing risk, while still scoring within an optimal range,” the executive search firm Russell Reynolds concluded in a 2016 study based on an analysis (pdf) of psychometric profiles of more than 6,000 CEOs. The best-in-class CEOs also score high on judgment and low on self-promotion; they project a collected demeanor.


The top technologies that enabled digital transformation this decade


Forrester recently said that enterprises across the world are increasingly turning to automation for a variety of tasks that used to be handled by humans. This is changing the workforce on a fundamental level, prompting fears in the next decade of mass job losses. But the field is also making enterprises better in a variety of concrete ways. Dangerous, time-consuming jobs at factories are increasingly being done by an army of robots, keeping people away from positions that have historically been damaging to their health. This has even bled into other fields like customer service, where many companies now use automated systems to respond to basic questions and complaints from consumers. Part of what's spurring the increase in automation is the advancement of artificial intelligence (AI), which is equipping robots and machines with a wider set of capabilities. Enterprises are using AI for everything from security to human resources, allowing computers to handle tasks that have become costly or redundant. While fears of automation and AI are very real, recent studies have shown that people actually like the introduction of automation and are generally happy computers or robots can handle menial tasks.


State police: We've been testing Spot robot dogs for use in dangerous situations


As per the agreement, MSP's bomb squad wanted to evaluate Spot in "law-enforcement applications, particularly remote inspection of potentially dangerous environments which may contain suspects and ordnance". The loan of Spot was uncovered by the American Civil Liberties Union (ACLU) of Massachusetts, which filed a public records request shortly after discovering a Facebook post by the Massachusetts State Police about an event on July 30 where it would explore the use of robotics in law-enforcement operations. An MSP spokesperson told WBUR that Spot was used as a "mobile remote observation device" that provided police with images of suspicious devices or potentially dangerous situations, such as where an armed suspect might be hiding. "Robot technology is a valuable tool for law enforcement because of its ability to provide situational awareness of potentially dangerous environments," state police spokesman David Procopio wrote. Spot has a 360-degree camera, crash protection, and can work in tough environments. It has a top speed of 3mph and can carry a payload of 14kg, or 31lb.


Looking into an intelligent cloud future

Self-balancing deployment models. Now we have public clouds, private clouds, traditional on-premises systems, edge-based computing, and more, and all these platforms can run systems and store data. The platforms will have many more capabilities in 10 years, and thus the core question becomes, What do you run, where? Hopefully, we’ll have self-migrating and self-balancing workloads figured out by next decade. Core enabling technology will determine where workloads and data sets should reside and move them there using automated back-end systems. This means that when you deploy an application workload on any type of system, the workload will understand what resources are available to it and self-migrate to the most optimal available platform. Criteria for the platform of choice will include lowest costs, fastest performance, and location closest to the application and data consumers. Punitive security automation. Hackers are getting more creative about how they attack systems in the public clouds. Right now, public cloud security is better than traditional system security, so hackers still focus on traditional systems as easy prey.



Quote for the day:


"Education makes people difficult to drive, but easy to lead; impossible to enslave, but easy to govern." -- Lorn Brougham


Daily Tech Digest - November 25, 2019

Avoiding the pitfalls of operating a honeypot

honey jar dripper
Honeypot operators are sometimes tempted to trick the hacker into downloading phone-home and other technologies in order to identify the hacker and/or better track their movements. Understand that downloading programming or other technology onto someone else's systems, or attempting to access their systems without their knowledge or consent, almost certainly violates state and federal anti-hacking laws – even if done in the context of cyber security. Penalties for these activities can be substantial and harsh. Never engage in such activities without the involvement and direction of law enforcement. ... Except for interactions with law enforcement, uses of personally identifiable information should be strictly avoided. Only aggregated or de-identified information should be used, particularly in the context of any published reports or statistics regarding operation of the honeypot. ... The law regarding entrapment is complicated, but if someone creates a situation intended solely to snare a wrongdoer, there is the potential for an argument that this constitutes entrapment. In such a case, law enforcement may decline to act on information gained from the honeypot.
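As a minimal sketch of the de-identification point, one approach is to count attacker activity under salted hashes rather than store raw identifiers. The field names and salt handling here are invented for illustration, and the IPs are from reserved documentation ranges:

    # Hypothetical sketch: aggregate honeypot hits without retaining raw IPs.
    import hashlib
    from collections import Counter

    SALT = b"rotate-me-regularly"  # keep secret; rotating it breaks linkability

    def pseudonymize(ip: str) -> str:
        # Salted hash lets reports count repeat visitors without
        # storing personally identifiable information.
        return hashlib.sha256(SALT + ip.encode()).hexdigest()[:12]

    hits = Counter()
    for record in [{"src_ip": "203.0.113.7"}, {"src_ip": "203.0.113.7"},
                   {"src_ip": "198.51.100.2"}]:
        hits[pseudonymize(record["src_ip"])] += 1

    # Publish only aggregated, de-identified counts.
    print(hits.most_common())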


Exploit code published for dangerous Apache Solr remote code execution flaw

Apache Solr
At the time it was reported, the Apache Solr team didn't see the issue as a big deal; developers thought an attacker could only access (useless) Solr monitoring data, and nothing else. Things turned out to be much worse when, on October 30, a user published proof-of-concept code on GitHub showing how an attacker could abuse the very same issue for "remote code execution" (RCE) attacks. The proof-of-concept code used the exposed port 8983 to enable support for Apache Velocity templates on the Solr server and then used this second feature to upload and run malicious code. A second, more refined proof of concept was published online two days later, making attacks even easier to execute. It was only after the publication of this code that the Solr team realized how dangerous the bug really was. On November 15, they issued an updated security advisory. In its updated alert, the Solr team recommended that Solr admins set the ENABLE_REMOTE_JMX_OPTS option in the solr.in.sh config file to "false" on every Solr node and then restart Solr.
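In practice, the recommended mitigation amounts to a one-line setting in each node's solr.in.sh, followed by a restart. Shown here as an excerpt only; the rest of the file varies per installation:

    # solr.in.sh (excerpt) -- disable remote JMX per the Solr advisory
    ENABLE_REMOTE_JMX_OPTS="false"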



Stateful Serverless: Long-Running Workflows with Durable Functions

There are a few reasons the workload doesn't appear to be a good fit for Azure Functions at first glance. It runs relatively long (the example was just part of the game; an entire game may take hours or days). In addition, it requires state to keep track of the game in progress. Azure Functions are by nature stateless, designed to be quick-running, self-contained transactions. Any concept of state must be managed using cache, storage, or database. If only the function could be suspended while waiting for asynchronous actions to complete and maintain its state when resumed. The Durable Task Framework is an open source library written to manage state and control flow for long-running workflows. Durable Functions build on the framework to provide the same support for serverless functions. In addition to facilitating potential cost savings for longer-running workflows, it opens a new set of patterns and possibilities for serverless applications. To illustrate these patterns, I created the Durable Dungeon. This article is based on a presentation I first gave at NDC Oslo.
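To make the suspend-and-resume idea concrete, here is a deliberately simplified Python sketch of the pattern (not the actual Durable Functions API): a workflow written as a generator suspends at each asynchronous step, and its state survives in local variables while a driver feeds results back in. All names and steps are invented for illustration:

    # Simplified illustration of the orchestration pattern.
    def game_workflow():
        player = yield "wait_for_player"        # suspend until a player joins
        monster = yield f"spawn_monster_for:{player}"
        outcome = yield f"await_battle:{player}:{monster}"
        return f"{player} {outcome}"

    def run(workflow, answers):
        # Stand-in for the framework's driver/replay loop.
        gen = workflow()
        step = next(gen)
        try:
            for answer in answers:
                print("completed:", step)
                step = gen.send(answer)         # resume with the async result
        except StopIteration as done:
            print("result:", done.value)

    run(game_workflow, ["alice", "dragon", "wins"])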


The Edge of Test Automation: DevTestOps and DevSecOps

On the edge
DevTestOps allows developers, testers, and operations engineers to work together in a shared environment. Apart from running test cases, DevTestOps also involves writing test scripts and performing automated, manual, and exploratory testing. In the past few years, DevOps and automation testing strategies have received a lot of appreciation because teams were able to develop and deliver products in the minimum time possible. But many organizations soon realized that, without continuous testing, DevOps provides an incomplete delivery of software that might be full of bugs and issues. And that's why DevTestOps was introduced. Now, DevTestOps is growing in popularity because it improves the relationship between the team members involved in a software development process. It not only helps in faster delivery of products but also produces higher-quality software. And when the software is released, automated test cases are already in place for future releases.


Q&A with Tyler Treat on Microservice Observability

A common misstep I see is companies chasing tooling in hopes that it will solve all of their problems. "If we get just one more tool, things will get better." Similarly, seeking a "single pane of glass" is usually a fool's errand. In reality, what the tools do is provide different lenses through which to view things. The composite of these is what matters, and there isn't a single tool that solves all problems. But while tools are valuable, they aren't the end of the story. As with most things, it starts with culture. You have to promote a culture of observability. If teams aren't treating instrumentation as a first-class concern in their systems, no amount of tooling will help. Worse yet, if teams aren't actually on call for the systems they ship to production, there is no incentive for them to instrument at all. This leads to another common mistake, which is organizations simply renaming an Operations team to an Observability team. This is akin to renaming your Ops engineers DevOps engineers and thinking it will flip some switch.
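As one hypothetical illustration of treating instrumentation as a first-class concern, a team might declare metrics alongside the code they measure, here using the prometheus_client Python library (the metric names and workload are invented):

    # Metrics declared next to the code they instrument.
    from prometheus_client import Counter, Histogram, start_http_server
    import time

    ORDERS = Counter("orders_processed_total", "Orders processed")
    LATENCY = Histogram("order_latency_seconds", "Order handling latency")

    def handle_order(order_id: str) -> None:
        with LATENCY.time():          # record how long handling takes
            time.sleep(0.05)          # stand-in for real work
            ORDERS.inc()              # count every processed order

    if __name__ == "__main__":
        start_http_server(8000)       # expose /metrics for scraping
        handle_order("demo-1")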


8 ways to prepare your data center for AI’s power draw

2 data center servers
Existing data centers might be able to handle AI computational workloads, but in a reduced fashion, says Steve Conway, senior research vice president for Hyperion Research. Many, if not most, workloads can be operated at half or quarter precision rather than 64-bit double precision. "For some problems, half precision is fine," Conway says. "Run it at lower resolution, with less data. Or with less science in it." Double-precision floating point calculations are primarily needed in scientific research, which is often done at the molecular level. Double precision is not typically used in AI training or inference on deep learning models because it is not needed; even Nvidia advocates the use of single- and half-precision calculations in deep neural networks. AI will be a part of your business, but not all of it, and that should be reflected in your data center. "The new facilities that are being built are contemplating allocating some portion of their facilities to higher power usage," says Doug Hollidge, a partner with Five 9s Digital, which builds and operates data centers. "You're not going to put all of your facilities to higher density because there are other apps that have lower draw."
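The footprint side of the precision trade-off Conway describes is easy to see in a short NumPy experiment (the array size is chosen arbitrarily): memory halves at each step down from double to single to half precision, and so does per-element accuracy.

    # Same data in float64, float32, and float16.
    import numpy as np

    values = np.linspace(0.0, 1.0, 1_000_000)

    for dtype in (np.float64, np.float32, np.float16):
        arr = values.astype(dtype)
        # nbytes shrinks 2x per step; eps (precision) grows coarser.
        print(dtype.__name__, arr.nbytes // 1024, "KiB,",
              "eps =", np.finfo(dtype).eps)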


Kubernetes meets the real world

Kubernetes is enabling enterprises of all sizes to improve their developer velocity, nimbly deploy and scale applications, and modernize their technology stacks. For example, the online retailer Ocado, which has been delivering fresh groceries to UK households since 2000, has built its own technology platform to manage logistics and warehouses. In 2017, the company decided to start migrating its Docker containers to Kubernetes and took its first application into production that summer on its own private cloud. The big benefits of this shift for Ocado and others have been much quicker time-to-market and more efficient use of computing resources. At the same time, Kubernetes adopters also tend to cite the same drawback: the learning curve is steep, and although the technology makes life easier for developers in the long run, it doesn't make life less complex. Here are some examples of large global companies running Kubernetes in production, how they got there, and what they have learned along the way.


HP to Xerox: We don't need you, you're a mess


The HP Board of Directors has reviewed and considered your November 21 letter, which has provided no new information beyond your November 5 letter. We reiterate that we reject Xerox's proposal as it significantly undervalues HP. Additionally, it is highly conditional and uncertain. In particular, there continues to be uncertainty regarding Xerox's ability to raise the cash portion of the proposed consideration and concerns regarding the prudence of the resulting outsized debt burden on the value of the combined company's stock even if the financing were obtained. Consequently, your proposal does not constitute a basis for due diligence or negotiation. We believe it is important to emphasize that we are not dependent on a Xerox combination. We have great confidence in our strategy and the numerous opportunities available to HP to drive sustainable long-term value, including the deployment of our strong balance sheet for increased share repurchases of our significantly undervalued stock and for value-creating M&A.


A new era of cyber warfare: Russia’s Sandworm shows “we are all Ukraine” on the internet

Cyber warfare  >  Russian missile launcher / Russian flag / binary code
This was “the kind of destructive act on the power grid we've never seen before, but we've always dreaded.” Even more concerning, “what happens in Ukraine we'll assume will happen to the rest of us too because Russia is using it as a test lab for cyberwar. That cyberwar will sooner or later spill out to the West,” Greenberg said. “When you make predictions like this, you don't really want them to come true.” Sandworm’s attacks did spill out to the West with its next big strike, the NotPetya malware, which swept across continents in June 2017, causing untold damage in Europe and the United States, but mostly in Ukraine. NotPetya took down “300 Ukrainian companies and 22 banks, four hospitals that I'm aware of, multiple airports, pretty much every government agency. It was a kind of a carpet bombing of the Ukrainian internet, but it did immediately spread to the rest of the world fulfilling [my] prediction far more quickly than I would have ever wanted it to,” Greenberg said. The enormous financial costs of NotPetya are still unknown, but for companies that have put a price tag on the attack, the figures are staggering.


Lessons Learned in Performance Testing


To remind ourselves, throughput is basically the number of operations completed per period of time (a typical example is operations per second). Latency, also known as response time, is the time from the start of an operation's execution to receiving its answer. These two basic metrics of system performance are usually connected to each other. In a non-parallel system, latency is actually the inverse of throughput and vice versa. This is very intuitive: if I do 10 operations per second, one operation (on average) takes 1/10 of a second. If I do more operations in one second, a single operation has to take less time. Intuitive. However, this intuition can easily break in a parallel system. As an example, just consider adding another request-handling thread to the webserver. You're not shortening the single-operation time, so latency stays (at best) the same; however, you double the throughput. From the example above, it's clear that throughput and latency are essentially two different metrics of a system. Thus, we have to test them separately.
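A small, self-contained experiment (with an assumed per-operation time standing in for I/O-bound work) makes the webserver example above measurable: the second worker roughly doubles throughput while single-operation latency stays flat.

    # Throughput vs. latency with 1 and 2 workers.
    import time
    from concurrent.futures import ThreadPoolExecutor

    OP_TIME = 0.05  # seconds each operation takes (assumed)
    N_OPS = 40

    def operation() -> float:
        start = time.perf_counter()
        time.sleep(OP_TIME)               # latency of a single operation
        return time.perf_counter() - start

    for workers in (1, 2):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            latencies = list(pool.map(lambda _: operation(), range(N_OPS)))
        elapsed = time.perf_counter() - start
        print(f"{workers} worker(s): throughput={N_OPS/elapsed:.1f} ops/s, "
              f"avg latency={sum(latencies)/N_OPS*1000:.0f} ms")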



Quote for the day:


"Becoming a leader is synonymous with becoming yourself. It is precisely that simple, and it is also that difficult." -- Warren G. Bennis