Daily Tech Digest - November 28, 2019

Cutting Cybersecurity Budgets In A Time of Growing Threats

Greater spending on cybersecurity products hasn't translated into a stronger organizational security posture. Despite the millions of dollars organizations spend year after year, the average cost of a cyberattack jumped by 50% between 2018 and 2019, hitting $4.6 million per incident. The percentage of cyberattacks costing $10 million or more nearly doubled to 13% over the same period. Enterprises are using a diverse array of endpoint agents, including encryption, AV/AM and EDR. The use of multiple security products may, in fact, weaken an organization's security position: the more agents an endpoint runs, the greater the probability it will be breached. This sprawling deployment also makes it difficult to standardize any single test that measures security and safety without sacrificing speed. Buying more cybersecurity tools tends to plunge enterprises into a costly cycle of spending ever more time and resources on security solutions without any corresponding increase in security. Yet in a chicken-and-egg dynamic, the trend of spending more on security products persists precisely because the cost of a breach keeps rising.
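The claim that more agents can mean more risk can be illustrated with a simple independence model (an illustrative sketch, not from the article or the underlying report): if each additional agent independently carries a small chance of introducing an exploitable flaw, the compound probability grows quickly with the number of agents.

```python
# Hypothetical illustration: assume each endpoint agent independently has a
# small probability of containing an exploitable flaw. The chance that at
# least one agent on the endpoint is exploitable compounds as agents are added.

def breach_probability(per_agent_risk: float, num_agents: int) -> float:
    """P(at least one exploitable agent) under an independence assumption."""
    return 1 - (1 - per_agent_risk) ** num_agents

for n in (1, 5, 10):
    print(n, round(breach_probability(0.03, n), 3))
```

With a hypothetical 3% per-agent risk, one agent gives 3%, five give about 14%, and ten give about 26%; the exact numbers are made up, but the compounding shape matches the article's point.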



Digital transformation: Business modernization requires a new mindset

A lot of executives actually want to share their frustrations, and one of the frustrations, especially with more, let's just say, legacy-oriented organizations, I'll hear about millennials all the time. And then also the coming of centennials. In that they do want to work differently, they do think differently, and infrastructures, and also models, don't necessarily support that way of thinking and way of working. The consumerization of technology, it hasn't just affected millennials or the younger workforce, it's affected all of us. I think, anybody who has a smartphone or uses social media, or has ordered an Uber or Lyft, or DoorDash, or Postmates, you name it, we have, as human beings, radically transformed. Our brains have radically transformed as we use more of these technologies, we're multitasking, we're doing a million things. Employees get something like 200 notifications during their work day, just from their phone and social and email. So a lot of the way that we have to think about work has to change. We have to think bigger than the millennial workforce.


Hotel front desks are now a hotbed for hackers


First spotted in 2015 but appearing to be most active this year, RevengeHotels has struck at least 20 hotels in quick succession. The threat actors focus on hotels, hostels, and hospitality & tourism companies. While the majority of the RevengeHotels campaign takes place in Brazil, infections have also been detected in Argentina, Bolivia, Chile, Costa Rica, France, Italy, Mexico, Portugal, Spain, Thailand, and Turkey. The threat group deploys a range of custom Trojans in order to steal guest credit card data from infected hotel systems as well as financial information sent from third-party booking websites such as Booking.com. The attack chain begins with a phishing email sent to a hospitality organization. Professionally-written and making use of domain typo-squatting to appear legitimate, the researchers say the messages are detailed and generally impersonate real companies.  These messages contain malicious Word, Excel or PDF documents, some of which will exploit CVE-2017-0199, a Microsoft Office RCE vulnerability patched in 2017.


Regaining ROI by reducing cloud complexity

Illustration of a woman in a suit hopping across clouds in a blue sky
“The first thing is admitting that there’s an issue, which is a tough thing to do,” Linthicum acknowledges. “It essentially requires creating an ad hoc organization to get things back on track and simplified, whether that’s hiring outside specialists or doing it internally. The good thing about that is typically you can get 10 times ROI over a two-year period if you spend the time on reducing complexity,” he says. Even with that incentive, reducing complexity involves a cultural change: shifting to a proactive, innovative, and more thoughtful culture, which many organizations are having trouble moving towards, he warned. The most effective way to do that is retraining, replacing, or revamping. “That’s going to be a difficult thing for most organizations,” Linthicum says. “I’ve worked with existing companies that had issues like this, and I found it was the hardest problem to solve. But it’s something that has to be solved before we can get to the proactivity, before we can get to using technology as a force multiplier, before we can get to the points of innovation.”


Top 5 SD-WAN Takeaways for 2019
Auto failover, redundancy, simplified management, and cost savings topped the list of factors driving SD-WAN adoption, according to Avant Communications’ SD-WAN report. “It is Avant’s belief that SD-WAN will continue to make ongoing incursions into the higher-end enterprise, beginning at remote offices and other edges of the network, and then reaching steadily closer toward the core,” the report reads. One of the biggest promises made by many SD-WAN vendors is that the technology will reduce costs by shifting bandwidth off of — and in some cases eliminating the need for — expensive MPLS connections. And while this can be true, with more than half of companies surveyed in the aforementioned Avant report indicating that cost savings over MPLS was a key concern, companies were still split on whether to keep their MPLS connections or replace them with SD-WAN and broadband internet. Roughly 40% of those surveyed said they planned to use a hybrid solution that combines the two.


Autonomous systems, aerial robotics and Game of Drones

Now, automation has basically enabled a level of productivity that you see today. But automation is very fragile, inflexible, expensive… it’s very cumbersome. Once you set them up and when everything is working well, it’s fantastic, and that is what we live with today. You know, autonomous systems, we think, can actually make that a lot easier. Now the broad industry is really still oriented toward automation. So we have to bring that industry over slowly into this autonomous world. And what’s interesting is, while these folks are experts in mechanical engineering and operations research and, you know, all those kind of important capabilities and logistics, they don’t know AI very much.  ... They don’t know how to create horizontal tool chains which enable efficient development and operations of these type of systems. So that’s the expertise we bring. I’d add one more point to it, is that the places we are seeing autonomous systems being built, like autonomous driving, they’re actually building it in a very, very vertical way.


How Machine Learning Enhances Performance Engineering and Testing


During testing, there are numerous signs that an application is producing a performance anomaly, such as delayed response time, increased latency, hanging, freezing, or crashing systems, and decreased throughput. The root cause of these issues can be traced to any number of sources, including operator errors, hardware/software failures, over- or under-provisioning of resources, or unexpected interactions between system components in different locations. There are three types of performance anomalies that performance testing experts look out for. ... Machine learning can be used to help determine statistical models of "normal" behavior in a piece of software. They are also invaluable for predicting future values and comparing them against the values being collected in real-time, which means they are constantly redefining what "normal" behavior entails. A great advantage of machine learning algorithms is that they learn over time. When new data is received, the model can adapt automatically and help define what "normal" is month-to-month or week-to-week.
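The statistical-baseline idea described above can be sketched in a few lines (illustrative only; the class and parameter names are assumptions, not from any product mentioned): maintain a rolling model of "normal" response times, flag values that fall well outside it, and keep updating the baseline as new data arrives so that "normal" adapts over time.

```python
from collections import deque

class ResponseTimeBaseline:
    """Rolling mean/std model of 'normal' response time; flags outliers.

    The window slides forward as new samples arrive, so the definition of
    "normal" adapts week-to-week, as the article describes.
    """

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold  # flag values beyond N standard deviations

    def is_anomaly(self, value: float) -> bool:
        anomalous = False
        if len(self.samples) >= 10:  # need some history before judging
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            std = var ** 0.5
            anomalous = std > 0 and abs(value - mean) > self.threshold * std
        self.samples.append(value)  # the baseline keeps learning either way
        return anomalous

baseline = ResponseTimeBaseline()
for t in [100, 102, 99, 101, 100, 98, 103, 100, 101, 99, 100]:
    baseline.is_anomaly(t)           # warm up on "normal" response times (ms)
print(baseline.is_anomaly(500))      # → True: far outside the learned baseline
```

Production systems would typically use more robust estimators (EWMA, seasonal decomposition, or learned models), but the adapt-as-you-go loop is the same idea.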


How Microsoft is using hardware to secure firmware

"Given the increase in firmware attacks we've seen in the last three years alone, the goal was to remove firmware as a trusted component of the boot process, so we're preventing these kinds of advanced firmware attacks," Dave Weston, director of OS security at Microsoft, told TechRepublic. The first line of the Windows boot loader on Secured-core PCs puts the CPU into a new security state where, instead of accepting the measurements made during Secure Boot, even though they're in the TPM, it goes back and revalidates the measurement. If they don't match, the PC doesn't boot and goes into BitLocker recovery mode instead. If you're managing the PC via Intune, it also sends a signal to the service that the device can't be trusted and shouldn't be allowed to connect to your network. "These PCs use the latest silicon from AMD, Intel, and Qualcomm that have the Trusted Platform Module 2.0 and Dynamic Root of Trust (DRTM) built in. The root of trust is a set of functions in the trusted computing module that is always trusted by a computer's OS and embedded in the device," Weston explains.
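The revalidation step Weston describes can be illustrated with a simplified model of measured boot (a sketch under assumptions; real TPM PCR and DRTM semantics are considerably more involved): each boot component's hash is folded into a running measurement, and the final value is compared against a known-good measurement before the system is allowed to proceed.

```python
import hashlib

def extend(measurement: bytes, component: bytes) -> bytes:
    """Fold a component's hash into the running measurement, TPM PCR-style:
    new = SHA-256(old || SHA-256(component))."""
    return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

def measure_boot_chain(components: list) -> bytes:
    measurement = b"\x00" * 32  # PCRs start zeroed
    for c in components:
        measurement = extend(measurement, c)
    return measurement

good_chain = [b"firmware-v1", b"bootloader-v1", b"kernel-v1"]
expected = measure_boot_chain(good_chain)

# Revalidation: a tampered firmware image changes the final measurement, so
# the boot would be refused (analogous to dropping into recovery mode).
tampered = [b"firmware-EVIL", b"bootloader-v1", b"kernel-v1"]
print(measure_boot_chain(tampered) == expected)  # → False
```

Because each step hashes over the previous result, changing any component anywhere in the chain changes the final measurement, which is what lets the boot loader detect tampering even in earlier stages.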



Not a single investment deal worth $100 million or more has been signed with an all-women team over the past four years, and only 7% of such deals went to mixed teams in 2019. That's still a slight improvement on the previous year, when every single mega-round went to teams led exclusively by men. Sarah Nöckel, investment associate at VC firm Dawn Capital, told ZDNet: "Europe is lagging behind on diversity. In general, there is still an ongoing unconscious bias towards women. There needs to be a lot more education to change mentalities." The issue is not that women are absent from the tech space. Out of 1,200 European tech founders surveyed in the report, nearly a quarter identified as women. Digging further, the report also found that women and men are almost equally qualified for science and engineering careers. In fact, in some countries, like Lithuania, the number of women who are scientists and engineers surpasses that of men. Women can and do found tech companies; the problem, rather, is that they then struggle to secure enough capital to develop their projects.


"Security campaigns do not work," says infosec professor Adam Joinson


The researchers' conclusions are based on a case study they performed with a large engineering services firm, based in the UK and employing more than 30,000 people. They found that - "whether we were talking to security practitioners or whether we were talking to employees" - security was not seen as something that supported the business; instead, it was perceived as a block. "In fact, they would see it as almost an adversary of employees," trying to catch and sanction workers for security breaches. One of the reasons for this was a misalignment between security policies and processes, and the lack of tools provided for employees to do their jobs. As part of an engineering firm, employees often had to deal with "massive" files from architects and similar, but the company capped email attachments at 15MB and did not allow workers to use USB sticks. Cloud storage, in one particular case, was banned by a client's security policies. "Effectively, security stopped them from doing the core function of their role."



Quote for the day:


"Don't necessarily avoid sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld


Daily Tech Digest - November 27, 2019

10 Predictions How AI Will Improve Cybersecurity In 2020

Nicko van Someren, Ph.D. and Chief Technology Officer at Absolute Software, observes that “Keeping machines up to date is an IT management job, but it's a security outcome. Knowing what devices should be on my network is an IT management problem, but it has a security outcome. And knowing what's going on and what processes are running and what's consuming network bandwidth is an IT management problem, but it's a security outcome. I don't see these as distinct activities so much as seeing them as multiple facets of the same problem space, accelerating in 2020 as more enterprises choose greater resiliency to secure endpoints.” ... Josh Johnston, Director of AI at Kount, predicts that “the average consumer will realize that passwords are not providing enough account protection and that every account they have is vulnerable. Captcha won’t be reliable either, because while it can tell if someone is a bot, it can’t confirm that the person attempting to log in is the account holder. AI can recognize a returning user. AI will be key in protecting the entire customer journey, from account creation to account takeover, to a payment transaction. ...”


Wolfram Language has limitations, and has been described by some users as better suited to solving a wide range of predetermined tasks, rather than being used to build software. It also seems there is still a way to go for Wolfram Language – it didn't, for example, feature in the IEEE's recent list of top programming languages. Wolfram has said that Wolfram Language is not just a language for telling computers what to do, but a way for both computers and humans to represent computational ways of thinking about things. Of late Wolfram has been more bold in how he talks about Wolfram Language, describing it as a "computational language" that could even help bridge the gulf between ourselves and future non-human intelligences, be they artificial intelligence (AI) or extraterrestrial. As esoteric a pursuit as it might seem, Wolfram believes the need for this lingua franca is timely, as machine-learning systems increasingly make decisions about our lives -- whether that's screening loan applications today or maybe even choosing whether to kill people tomorrow.


Tech jobs: These are the skills hiring managers are looking for now


CompTIA noted that the technology workforce, in particular, has been under the microscope for its lack of diversity. Diversity in tech staffing is likely to improve due to continuing pressure, the association said, but "fully diverse and inclusive environments still lie further in the future". A wide range of research and anecdotal examples proves that there's still much work to do in achieving equity, from data on wage gaps to the makeup of executive teams to ongoing reports of abusive behaviour, CompTIA said. Although 30% of companies feel that there has been significant improvement in the diversity of the tech workforce over the past two years, previous CompTIA research shows that "sentiment tends to skew more positive than reality on this topic." "The trend may be heading in the right direction, but the chasm was so wide that it will take significant time and intentional changes to close," said CompTIA, noting that there is a long list of potential actions that could improve the situation. Flexible work arrangements, including the physical environment, can create more opportunities and a more welcoming atmosphere, especially if there is a hard look at how the existing arrangements unintentionally create barriers, the association said.


AI Is The Link Between Big Data & Persons-Level Measurement

To highlight the shortcomings of big data from a measurement perspective, we conducted an analysis in the U.S. earlier this year that compared set-top box data with set-top box data that we calibrated with Nielsen panel data. The analysis found that the uncalibrated data is inherently biased and underrepresents minority audiences. That’s not to say, however, that big data has no value. Quite the opposite. But it does need to be grounded in a foundational truth set. That’s where our panels and artificial intelligence (AI) come into play. Our panel data—the key to persons-level measurement—is the perfect truth set for training big data. Through the application of AI, we use big data to dramatically broaden our measurement capabilities while preserving quality and representativeness. Today, AI is integral in our measurement methodologies. For example, it played a pivotal role in the development of our enhanced measurement capabilities for local TV markets, which combines the scale of big data (return path data [RPD] from TV sets) with fully representative in-market panel data.


GDPR Data Regulations & Commercial Fines


The public and private sector are both impacted, although government agencies have more leeway across GDPR in general due to requirements to retain and use data to deliver services to citizens. In terms of what best practice should be in dealing with a request, the advice from the UK’s Information Commissioner’s Office is that there should be a policy for recording all “subject access requests” and that based on Recital 59 of the GDPR, organisations “provide means for requests to be made electronically, especially where personal data are processed by electronic means.” This process will start with an access request form but when it comes to identity, the guidance is unclear. A number of organisations are asking for a similar set of documents that most banks require to open an account which includes a “proof of identity” such as a passport, photo driving license or birth certificate along with a “proof of address” such as a utility bill, bank statement or credit card statement. This requirement to verify from copies or scans of electronic documents is a major weakness in this process. 


Non-functional Requirements
Simply put, a non-functional requirement is a specification that describes the system’s operational capabilities and constraints that enhance its functionality. These may be speed, security, reliability, etc. We’ve already covered different types of software requirements, but this time we’ll focus on non-functional ones, and how to approach and document them. If you’ve ever dealt with non-functional requirements, you may know that different sources and guides use different terminology. For instance, the ISO/IEC 25000 standards framework defines non-functional requirements as system quality and software quality requirements. BABOK, one of the main knowledge sources for business analysts, suggests the term non-functional requirements (NFR), which is currently the most common designation. Nevertheless, these terms refer to the same thing: requirements that describe operational qualities rather than the behavior of the product. The list of them also varies depending on the source.


The Road to 2030 Must Be Circular


What gets exciting, is when you can find the perfect material match in someone else’s waste. Carbon fiber is a great example. Turns out computers use a similar grade carbon fiber as airplanes. So we reclaim aerospace material for Latitude, our commercial notebook line. To date, Dell has prevented more than 2 million pounds of carbon fiber from ending up in landfills. And in this case, the benefits go far beyond the environment. We’ve partnered with Carbon Conversions, a start-up based in South Carolina with a mission to reclaim and recycle carbon fiber. Carbon Conversions has redesigned and reengineered the papermaking process to produce carbon fiber non-woven fabrics, bringing new growth to an area historically impacted by overseas manufacturing. Finding more partners like Carbon Conversions will be important. It will also be important to increase our own recycling streams dramatically (i.e. you all have a role to play too). We must make it as easy as possible for you to recycle.


Bringing Business and IT Together, Part II: Organizational Alignment

COA is similar to other continuous improvement processes such as continuous quality improvement (CQI) and continuous process improvement (CPI). Just as CQI and CPI demand structure and metrics, so too does COA. Continuous improvement is evolutionary and incremental. It is manageable only when understood as a set of interconnected components that can be identified and measured. The COA Framework illustrated in Figure 1 provides the necessary structure. This three-dimensional structure associates the core elements of COA – those of organizational alignment and working relationships – with the activities of continuous improvement. The framework identifies the components that can be managed, measured, and modified to improve the overall alignment of business and technology organizations. ... Organization-to-organization relations are ideally structured and business-like. Conversely, person-to-person relationships are best when unstructured and friendly. Team-to-team relationships seek a balance between the two extremes.


VMware doubles up on Kubernetes play


Many of our large customers have Kubernetes clusters on vSphere, Amazon EC2 and sometimes bare metal. These are managed by different teams, making it difficult to manage and control everything. That was a problem we wanted to solve. Then comes the next question on how we can help customers build and deploy new applications. Historically, we’ve relied on Pivotal as a partner to help customers modernise their applications. While Pivotal Cloud Foundry is a great platform, Pivotal last year decided to use Kubernetes as the default runtime for their developer platform. Meanwhile, Spring Boot was becoming the de facto way by which people built microservices. So, we felt that by bringing Pivotal into the family, we could offer a very comprehensive solution to help customers build, run and manage their modern applications.


Using Kanban with Overbård to Manage Development of Red Hat JBoss EAP

Red Hat JBoss EAP (Enterprise Application Platform) has become a very complex product. As a result, planning EAP releases is also increasingly complicated. In one extreme case of the team working on the next major release while developing features for the previous minor release, the planning for that major release was ongoing for 14 months with the requirements constantly changing. However, spending more effort on planning didn't improve the end result; it didn't make us any smarter or more accurate. We'd rather spend more time doing stuff rather than talking about it. That was a major problem. In addition, there were cases in which requirements could be misunderstood or miscommunicated and we found that out late in the cycle. We had to find a way to collectively iterate over a requirement and make sure everyone understood what was to be done. In some cases we could go as far as implementing a proof-of-concept before we would be certain we fully understood the problem and the proposed solution.



Quote for the day:


"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik


Daily Tech Digest - November 26, 2019

Exploit kits, or EKs, are web-based applications hosted by cyber-criminals. EK operators usually buy web traffic from malvertising campaigns or botnet operators. Traffic from malicious ads or hacked websites is sent to an EK's so-called "gate" where the EK operator selects only users with specific browsers or Adobe Flash versions and redirects these possible targets to a "landing page." Here is where the EK runs an exploit -- hence the name exploit kit -- and uses a browser or Flash vulnerability to plant and execute malware on a user's computer. But in a report released last week, Malwarebytes researchers say EK operators are changing their tactics. Instead of relying on dropping malware on disk and then executing the malware, at least three of the nine currently active EKs are now using fileless attacks. A fileless attack relies on loading the malicious code inside the computer's RAM, without leaving any traces on disk. Fileless malware has been around for more than half a decade, but this is the first time EKs are broadly adopting the technique.


Samsung adds two modems to help enable wider 5G rollout


"Samsung has tapped its leadership in semiconductor and network technology–and combined it with its expertise in 5G research and development–to introduce one of the industry's first SoC 5G New Radio modems: the S8600 and S9100," Johnston wrote. ASIC-based system-on-a-chip (SoC) product designs have become popular because they are more power efficient and have increased operating frequency capabilities, addressing the high-volume, mass production requirements that the industry is now demanding. "These new modems support two architectural options for operators. The S8600 powers Samsung's Digital Unit in separated radio-digital configurations for both 4G and 5G, while the S9100 powers Samsung's 5G integrated Access Unit," he added in his blog post about the new modems. Johnston added that most companies are opting for more power-conscious circuits that are permanent and application-specific, as opposed to circuitry that needs to be programmed or reconfigured. The new Samsung tools will help support 5G networks that are easier to enable, smaller in size and more efficient in how they use power, he said.



The Impact of Cloud Computing on the Insurance Industry

Companies that use cloud systems greatly reduce the cost of purchasing hardware and software, thanks to on-demand and pay-per-use models. They no longer have to buy local servers and data centers, which require specialized personnel to manage and maintain, and which take up physical space and consume electricity 24 hours a day, 7 days a week. And, since most services are provided on-demand, you can have access to abundant computing resources quickly, easily, and with the flexibility your business needs, without an expensive hardware or software investment. All of this helps optimize performance and internal processes, not least because, by hosting platforms, software, and databases remotely, you’re able to free up memory and computing power on individual machines within the organization. Optimization and efficiency also apply to the production of documents, such as policies, forms, and contracts of various kinds.


T-Mobile data breach affects more than 1 million customers


Few details of the breach have been made public, other than the fact that it was a cyber attack and that approximately 1.5% of T-Mobile’s 75 million customers were affected – about 1.1 million. T-Mobile added that the suspicious activity was initially spotted at the beginning of November, with criminal hackers accessing the information of prepaid wireless account holders. Although the organisation promptly reported the incident to the authorities, it has waited until now to inform customers and the public – presumably to ensure it had all the facts straight. There are few things worse than announcing the details of a data breach only to later find that things are much worse than you initially thought. This happens all too often, with organisations facing an initial backlash, then adding fuel to the fire with more bad news. Because the breach occurred in the US rather than the EU, it isn’t subject to the GDPR (General Data Protection Regulation), which would have required T-Mobile to inform customers within 72 hours of learning about it.


Why the IT4IT™ Standard is Key to Driving Business Value for CIOs


The IT4IT standard provides the CIO with a holistic overview of what his organization is doing well and what needs improvement, as well as highlighting how to close the gaps across the business. Three transformation use cases the IT4IT standard helps accelerate are: re-architecting to co-create strategy with the business; rationalizing the application portfolio to reduce waste and free up funds for innovation programs; and driving automation by analyzing and selecting integration points for automation to improve the quality and speed of product and service delivery. The pressure to continually innovate and adopt the most effective solutions is likely to remain in today’s business landscape. But in order to create real value, today’s CIO must not only focus on innovation but on empowering the IT system to work as a competitive driver. They must think holistically and prioritize the management of IT processes to meet the demands of customers, increased competition, and a changing business climate.


Adoption of Cloud-Native Architecture, Part 1: Architecture Evolution and Maturity

Software design practices like domain-driven design (DDD) and enterprise integration patterns (EIP) have been available since around 2003, and some teams were already developing applications as modular services, but traditional infrastructure like heavyweight J2EE application servers for Java applications and IIS for .NET applications didn't help with modular deployments. With the emergence of cloud hosting and especially PaaS offerings like Heroku and Cloud Foundry, the developer community had everything it needed for true modular deployment and scalable business apps. This gave rise to the microservices evolution. Microservices offered the possibility of fine-grained, reusable functional and non-functional services. Microservices became more popular in 2013-2014. They are powerful, and enable smaller teams to own the full-cycle development of specific business and technical capabilities. Developers can deploy or upgrade code at any time without adversely impacting the other parts of the system.


Why your CEO’s personal risk taking matters


People expect CEOs to be risk takers, which makes sense given the nature of the job. That belief may be why corporate boards have been relatively forgiving of the kind of eccentric, grandiose, and sometimes dangerous behavior that the media laps up — and that the public and investors question when it is exposed. After all, it matches the “risk seeker” stereotype. But the #MeToo movement and the occasionally egregious behavior of bubble-economy CEOs suggests that times are changing. Boards and shareholders want to be confident not only that CEOs are comfortable taking business risks, but that they have good judgment about which risks to pursue and when to take a pass. “CEOs meaningfully outscore other executives in embracing risk, while still scoring within an optimal range,” the executive search firm Russell Reynolds concluded in a 2016 study based on an analysis (pdf) of psychometric profiles of more than 6,000 CEOs. The best-in-class CEOs also score high on judgment and low on self-promotion; they project a collected demeanor.


The top technologies that enabled digital transformation this decade


Forrester recently said that enterprises across the world are increasingly turning to automation for a variety of tasks that used to be handled by humans. This is changing the workforce on a fundamental level, prompting fears in the next decade of mass job losses. But the field is also making enterprises better in a variety of concrete ways. Dangerous, time-consuming jobs at factories are increasingly being done by an army of robots, keeping people away from positions that have historically been damaging to their health. This has even bled into other fields like customer service, where many companies now use automated systems to respond to basic questions and complaints from consumers. Part of what's spurring the increase in automation is the advancement of artificial intelligence (AI), which is equipping robots and machines with a wider set of capabilities. Enterprises are using AI for everything from security to human resources, allowing computers to handle tasks that have become costly or redundant. While fears of automation and AI are very real, recent studies have shown that people actually like the introduction of automation and are generally happy computers or robots can handle menial tasks.


State police: We've been testing Spot robot dogs for use in dangerous situations


As per the agreement, MSP's bomb squad wanted to evaluate Spot in "law-enforcement applications, particularly remote inspection of potentially dangerous environments which may contain suspects and ordinances". The loan of Spot was uncovered by the American Civil Liberties Union (ACLU) of Massachusetts, which filed a public records request shortly after discovering a Facebook post by the Massachusetts State Police about an event on July 30 where it would explore the use of robotics in law-enforcement operations. An MSP spokesperson told WBUR that Spot was used as a "mobile remote observation device" that provided police with images of suspicious devices or potentially dangerous situations, such as where an armed suspect might be hiding. "Robot technology is a valuable tool for law enforcement because of its ability to provide situational awareness of potentially dangerous environments," state police spokesman David Procopio wrote. Spot has a 360-degree camera, crash protection, and can work in tough environments. It has a top speed of 3mph and can carry a payload of 14kg, or 31lb.


Looking into an intelligent cloud future

Self-balancing deployment models. Now we have public clouds, private clouds, traditional on-premises systems, edge-based computing, and more, and all these platforms can run systems and store data. The platforms will have many more capabilities in 10 years, and thus the core question becomes: what do you run, and where? Hopefully, we’ll have self-migrating and self-balancing workloads figured out by next decade. Core enabling technology will determine where workloads and data sets should reside and move them there using automated back-end systems. This means that when you deploy an application workload on any type of system, the workload will understand what resources are available to it and self-migrate to the most optimal available platform. Criteria for the platform of choice will include lowest costs, fastest performance, and location closest to the application and data consumers. Punitive security automation. Hackers are getting more creative about how they attack systems in the public clouds. Right now, public cloud security is better than traditional system security, so hackers still focus on traditional systems as easy prey.



Quote for the day:


"Education makes people difficult to drive, but easy to lead; impossible to enslave, but easy to govern." -- Lorn Brougham


Daily Tech Digest - November 25, 2019

Avoiding the pitfalls of operating a honeypot

Operators of honeypots sometimes desire to trick the hacker into downloading phone-home and other technologies for purposes of identifying the hacker and/or better tracking their movements. Understand that downloading programming and other technology onto someone’s systems or attempting to access their systems without their knowledge or consent almost certainly violates state and federal anti-hacking laws – even if done in the context of cyber security. Penalties for these activities can be substantial and harsh. Never engage in such activities without the involvement and direction of law enforcement. ... Except for interactions with law enforcement, uses of personally identifiable information should be strictly avoided. Only aggregated or de-identified information should be used, particularly in the context of any published reports or statistics regarding operation of the honeypot. ... The law regarding entrapment is complicated, but if someone creates a situation intended solely to snare a wrongdoer, there is the potential for an argument this constitutes entrapment. In such a case, law enforcement may decline to take action on information gained from the honeypot.


Exploit code published for dangerous Apache Solr remote code execution flaw

At the time it was reported, the Apache Solr team didn't see the issue as a big deal, and developers thought an attacker could only access (useless) Solr monitoring data, and nothing else. Things turned out to be much worse when, on October 30, a user published proof-of-concept code on GitHub showing how an attacker could abuse the very same issue for "remote code execution" (RCE) attacks. The proof-of-concept code used the exposed 8983 port to enable support for Apache Velocity templates on the Solr server and then used this second feature to upload and run malicious code. A second, more refined proof-of-concept code was published online two days later, making attacks even easier to execute. It was only after the publication of this code that the Solr team realized how dangerous this bug really was. On November 15, they issued an updated security advisory. In its updated alert, the Solr team recommended that Solr admins set the ENABLE_REMOTE_JMX_OPTS option in the solr.in.sh config file to "false" on every Solr node and then restart Solr.
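The recommended mitigation boils down to a one-line configuration change. A minimal sketch of what the advisory describes (file locations and the restart command vary by installation; `bin/solr` is the stock layout):

```shell
# In solr.in.sh on every Solr node, disable remote JMX:
ENABLE_REMOTE_JMX_OPTS="false"

# Then restart each node so the setting takes effect:
bin/solr restart
```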



Stateful Serverless: Long-Running Workflows with Durable Functions

There are a few reasons the workload doesn’t appear to be a good fit for Azure Functions at first glance. It runs relatively long (the example was just part of the game; an entire game may take hours or days). In addition, it requires state to keep track of the game in progress. Azure Functions are by nature stateless. They are designed to be quick-running, self-contained transactions. Any concept of state must be managed using cache, storage, or database. If only the function could be suspended while waiting for asynchronous actions to complete and maintain its state when resumed. The Durable Task Framework is an open source library that was written to manage state and control flow for long-running workflows. Durable Functions build on the framework to provide the same support for serverless functions. In addition to facilitating potential cost savings for longer-running workflows, it opens a new set of patterns and possibilities for serverless applications. To illustrate these patterns, I created the Durable Dungeon. This article is based on a presentation I first gave at NDC Oslo.
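The "suspend while waiting, resume with state" idea can be sketched with a plain generator. The snippet below is a toy illustration of the orchestration pattern, not the actual Durable Functions API (the game and its moves are invented for the example):

```python
# Toy orchestration: each `yield` suspends the workflow until an external
# event arrives, and local variables (the game state) survive suspension.
def game_workflow():
    state = {"moves": []}
    while len(state["moves"]) < 3:
        move = yield "waiting for next move"  # suspend here
        state["moves"].append(move)           # resume with state intact
    return state                              # workflow complete

wf = game_workflow()
next(wf)           # start the workflow; it runs to the first suspension
wf.send("attack")  # an external event resumes it; state is preserved
wf.send("defend")
try:
    wf.send("heal")            # the third move completes the workflow
except StopIteration as done:
    print(done.value)          # {'moves': ['attack', 'defend', 'heal']}
```

Durable Functions applies the same principle at scale by replaying the orchestrator against an event history, so state survives process restarts rather than only function calls.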


The Edge of Test Automation: DevTestOps and DevSecOps

DevTestOps allows developers, testers, and operations engineers to work together in a similar environment. Apart from running test cases, DevTestOps also involves writing test scripts and performing automated, manual, and exploratory testing. In the past few years, DevOps and automation testing strategies have received a lot of appreciation because teams were able to develop and deliver products in the minimum time possible. But many organizations soon realized that without continuous testing, DevOps provides an incomplete delivery of software that might be full of bugs and issues. And that’s why DevTestOps was introduced. Now, DevTestOps is growing in popularity because it improves the relationship between the team members involved in a software development process. It not only helps in faster delivery of products but also provides high-quality software. And when the software is released, automated test cases are already in place for future releases.


Q&A with Tyler Treat on Microservice Observability

A common misstep I see is companies chasing tooling in hopes that it will solve all of their problems. "If we get just one more tool, things will get better." Similarly, seeking a "single pane of glass" is usually a fool’s errand. In reality, what the tools do is provide different lenses through which to view things. The composite of these is what matters, and there isn’t a single tool that solves all problems. But while tools are valuable, they aren’t the end of the story. As with most things, it starts with culture. You have to promote a culture of observability. If teams aren’t treating instrumentation as a first-class concern in their systems, no amount of tooling will help. Worse yet, if teams aren’t actually on-call for the systems they ship to production, there is no incentive for them to instrument at all. This leads to another common mistake, which is organizations simply renaming an Operations team to an Observability team. This is akin to renaming your Ops engineers to DevOps engineers thinking it will flip some switch. 


8 ways to prepare your data center for AI’s power draw

Existing data centers might be able to handle AI computational workloads, but in a reduced fashion, says Steve Conway, senior research vice president for Hyperion Research. Many, if not most, workloads can be operated at half or quarter precision rather than 64-bit double precision. “For some problems, half precision is fine,” Conway says. “Run it at lower resolution, with less data. Or with less science in it.” Double-precision floating point calculations are primarily needed in scientific research, which is often done at the molecular level. Double precision is not typically used in AI training or inference on deep learning models because it is not needed. Even Nvidia advocates the use of single- and half-precision calculations in deep neural networks. AI will be a part of your business, but not all of it, and that should be reflected in your data center. “The new facilities that are being built are contemplating allocating some portion of their facilities to higher power usage,” says Doug Hollidge, a partner with Five 9s Digital, which builds and operates data centers. “You’re not going to put all of your facilities to higher density because there are other apps that have lower draw.”
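The precision trade-off is easy to demonstrate. The snippet below (an illustration using Python's standard `struct` module, not something from the article) round-trips the same value through half- and double-precision storage:

```python
import struct

x = 1.0001

# Store the value as a 16-bit (half) and a 64-bit (double) float,
# then read each one back.
half = struct.unpack('<e', struct.pack('<e', x))[0]
double = struct.unpack('<d', struct.pack('<d', x))[0]

print(half)    # 1.0 -- half precision cannot represent the extra digits
print(double)  # 1.0001 -- double precision keeps them

# Half precision also uses a quarter of the memory, which is part of
# its appeal for AI training and inference.
print(len(struct.pack('<e', x)), len(struct.pack('<d', x)))  # 2 8
```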


Kubernetes meets the real world

Kubernetes is enabling enterprises of all sizes to improve their developer velocity, nimbly deploy and scale applications, and modernize their technology stacks. For example, the online retailer Ocado, which has been delivering fresh groceries to UK households since 2000, has built its own technology platform to manage logistics and warehouses. In 2017, the company decided to start migrating its Docker containers to Kubernetes, taking its first application into production in the summer of 2017 on its own private cloud. The big benefits of this shift for Ocado and others have been much quicker time-to-market and more efficient use of computing resources. At the same time, Kubernetes adopters also tend to cite the same drawback: The learning curve is steep, and although the technology makes life easier for developers in the long run, it doesn’t make life less complex. Here are some examples of large global companies running Kubernetes in production, how they got there, and what they have learned along the way.


HP to Xerox: We don't need you, you're a mess


The HP Board of Directors has reviewed and considered your November 21 letter, which has provided no new information beyond your November 5 letter. We reiterate that we reject Xerox's proposal as it significantly undervalues HP. Additionally, it is highly conditional and uncertain. In particular, there continues to be uncertainty regarding Xerox's ability to raise the cash portion of the proposed consideration and concerns regarding the prudence of the resulting outsized debt burden on the value of the combined company's stock even if the financing were obtained. Consequently, your proposal does not constitute a basis for due diligence or negotiation. We believe it is important to emphasize that we are not dependent on a Xerox combination. We have great confidence in our strategy and the numerous opportunities available to HP to drive sustainable long-term value, including the deployment of our strong balance sheet for increased share repurchases of our significantly undervalued stock and for value-creating M&A.


A new era of cyber warfare: Russia’s Sandworm shows “we are all Ukraine” on the internet

This was “the kind of destructive act on the power grid we've never seen before, but we've always dreaded.” Even more concerning, “what happens in Ukraine we'll assume will happen to the rest of us too because Russia is using it as a test lab for cyberwar. That cyberwar will sooner or later spill out to the West,” Greenberg said. “When you make predictions like this, you don't really want them to come true.” Sandworm’s adversarial attacks did spill out to the West in its next big attack, the NotPetya malware, which swept across continents in June 2017 causing untold damage in Europe and the United States, but mostly in Ukraine. NotPetya took down “300 Ukrainian companies and 22 banks, four hospitals that I'm aware of, multiple airports, pretty much every government agency. It was a kind of a carpet bombing of the Ukrainian internet, but it did immediately spread to the rest of the world fulfilling [my] prediction far more quickly than I would have ever wanted it to,” Greenberg said. The enormous financial costs of NotPetya are still unknown, but for companies that have put a price tag on the attack, the figures are staggering.


Lessons Learned in Performance Testing


To remind ourselves, throughput is basically counting the number of operations done per some period of time (a typical example is operations per second). Latency, also known as response time, is the time from the start of the execution of the operation to receiving the answer. These two basic metrics of system performance are usually connected to each other. In a non-parallel system, latency is actually an inverse of throughput and vice versa. This is very intuitive - if I do 10 operations per second, one operation is (on average) taking 1/10 second. If I do more operations in one second, the single operation has to take less time. Intuitive. However, this intuition can easily break in a parallel system. As an example, just consider adding another request-handling thread to the webserver. You’re not shortening the time of a single operation, so latency stays (at best) the same; however, you double the throughput. From the example above, it’s clear that throughput and latency are essentially two different metrics of a system. Thus, we have to test them separately.
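The relationship described above can be put into numbers. A toy model (invented figures, not a real load test) of a serial server versus one with two request-handling threads:

```python
service_time = 0.1  # seconds a single operation takes (the latency)

# Non-parallel system: latency is the inverse of throughput.
throughput_serial = 1 / service_time          # 10 operations per second
print(throughput_serial)                      # 10.0

# Parallel system: a second request-handling thread doubles throughput,
# but each individual operation still takes just as long.
workers = 2
throughput_parallel = workers / service_time  # 20 operations per second
latency_parallel = service_time               # still 0.1 s
print(throughput_parallel, latency_parallel)  # 20.0 0.1
```

Because the two metrics decouple under parallelism, a benchmark that only measures one of them can hide a regression in the other.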



Quote for the day:


"Becoming a leader is synonymous with becoming yourself. It is precisely that simple, and it is also that difficult." -- Warren G. Bennis


Daily Tech Digest - November 24, 2019

Could Process Mining Be Bigger Than RPA (Robotic Process Automation)?

“Process mining is an easy idea,” said Rinke. “But it is hard to make it work right for organizations. You need to collect large amounts of data from all sorts of IT systems. You also need to go beyond integrations and must understand the databases that are underneath. And all are customized.” No doubt, a key driver for Celonis has been the rapid growth of RPA (Robotic Process Automation). “In RPA, you'll often get to the first low-hanging opportunities by asking people what routines take up most of their time,” said Antti Karjalainen, who is the CEO of Robocorp. “As companies progress in their automation journey, data-driven technologies become an important part of identifying opportunities. People might not even realize how their own work is related to work done in other areas of the company and process discovery technologies can uncover these hidden workflows.” But the Celonis software is not just for upfront analytics. It is something that is useful for ongoing monitoring to make sure that an RPA implementation is on track.



Designing for Flexibility

You should build the systems around the Processes, not the Organizations. That way, you could change the systems all you want and it wouldn’t affect the Organizations ... or, you could change the Organizations all you want and it wouldn’t affect the systems. That is, there is a “many-to-many” relationship between Process and Organization. (Any one Process may be performed by many Organizations and any one Organization may perform many Processes. Organization and Process are independent variables. Orthogonal.) Apparently this Process-to-Organization independence is still not very well understood. Within the last two or three years, I heard Steve Towers, a notable figure in the Process Management community, speaking at a conference in Bangalore, India, emphasizing a strong point: “The Process TRANSCENDS the organization!” That is, a Process may have many Organizations involved and, conversely, an Organization may be involved in many Processes. That is, once again, there is a many-to-many relationship between Processes and Organizations ... or, they are “independent variables.” Dewey had figured that out sometime before I found him in 1970.


Robotic Process Automation Analytics: KPIs for Your RPA Deployment

This means that scaling is on an ascending trend, probably because CEOs have started to realise the benefits of enterprise-wide deployment. There is no denying that the KPIs make a significant contribution to this trend. If this is so, it is legitimate to wonder about ways of optimally setting the KPIs for your RPA deployment. The question is related to the subject of choosing appropriate metrics for a comprehensive assessment of ROI, beyond the financial impact of leveraging RPA in your company. In fact, the need to track various kinds of benefits, some of which are plain to observe and easy to calculate (the reduced costs of implementation, etc.), is a precondition of obtaining accurate estimates of the ROI made possible by automation. Setting your robotic process automation KPIs can thus be seen as a road opener for measuring ROI, which is itself a business metric.



A Leading EA Tool Comes with Extras

All these other competencies come into play if your business change initiative is to be successful. This second aspect triggered an interesting thought in our minds: an enterprise architecture initiative’s success depends on more than how capable the EA management suite is. Sure, that’s the most visible variable, but it too exists within a landscape of factors, each of which contributes significantly towards the desired outcome. Just think of your vehicle. It may seem like all it needs to run is gasoline, but try driving your family’s car around without changing the coolant, engine oil, or the windshield washer fluid and you’re probably not going to get very far. Therefore, should you find yourself in a situation to procure an EA tool for your organization in the future, remember that while having a mature and competent EA platform is vital, there are other aspects which you should not ignore lest you place a very low ceiling on your transformation initiative. These are the extras that a great EA tool comes with.


Automating our future: an inside look at robotic process automation


Businesses are deploying RPA to efficiently manage large-scale processing in ways that are customizable throughout each individual business. RPA is currently being used across almost all industries and functions, including IT, finance and accounting, human resources and customer service. RPA can be leveraged for an array of tasks – whether it is auto-populating forms or spreadsheets, organizing incoming information or processing transactions. What’s “new” about RPA is that benefits are generated for employees and businesses alike. For example, State Auto, a super-regional insurance holding company headquartered in Columbus, Ohio, uses RPA for back-office tasks. Auditors at State Auto go through thousands of policies to determine recommendations for changing rates. Policies that don’t need to be audited still have to be documented, which requires performing routine data-entry with two separate and unconnected systems, selecting codes and making drop-down selections. With robots in place, this activity happens rapidly and error-free, releasing individuals from time-consuming, and mind-numbing, tasks.


New bypass disclosed in Microsoft PatchGuard (KPP)

After Windows 10's release in 2015, the most notable of all PatchGuard bypasses was GhostHook, discovered by CyberArk researchers in 2017. GhostHook abused the Intel Processor Trace (PT) feature to bypass PatchGuard and patch the kernel. A second bypass was discovered and disclosed over the summer, in July. Found by Nick Peterson, anti-cheat expert at Riot Games, this bypass was named InfinityHook, and abused the NtTraceEvent API to patch the kernel. Describing the bypass at the time, Peterson said "InfinityHook stands to be one of the best tools in the rootkit arsenal over the last decade." Last month, a third PatchGuard bypass was disclosed, this time by Turkish software developer Can Bölük. Named ByePg, this exploit hijacks the HalPrivateDispatchTable to allow a rogue app to patch the kernel. Just like Peterson, when describing ByePg, Bölük said that the "weaponization potential of [ByePg] is only limited by your creativity."


Human Face to Enterprise Architecture


When performing market analysis and defining the journeys, an important step is to prioritize things accordingly. The first step of prioritization is to define the customer personas or segments that are of value or interest for the business. These may be the key target audience for the product or service, or they may be the most challenging segments (e.g. people likely to churn), so a business initiative can be focused on smoothing the experience for these people and reducing the likelihood of unwanted events. When the key personas or customer segments are defined, the journeys can be prioritized next. The same user can have multiple potential journeys that interact with the business. They can come from a targeted acquisition campaign or discover the service organically. They may have a positive or negative previous experience with this type of service, etc. Just like it is important to prioritize the customer types, it is important to prioritize the journeys for those customers. Once those decisions are made, it is clear which customer journey the business is working with. This is when enterprise architecture can be added into the picture.


Eliot Bendinelli, a technologist with UK non-profit Privacy International, says the organization wanted data protection agencies to take action because it believed there was a fundamental problem with the tracking industry. Its project began with an investigation into sales in the field of ad tech companies, credit rating agencies, ad blockers, and related organizations, he says. "We were building a case, and basically we think what they're doing is unlawful," Bendinelli continues. While waiting for agencies to act, the research team wanted to find an example of how tracking is taking place on Web pages where people go to read and share sensitive data. "We wanted a concrete example of how tracking is happening on websites where you think you are safe, and where you are looking up or exchanging data that is sensitive and personal," he adds. They chose sites related to mental health because, as Bendinelli puts it, people may research mental health conditions online because they aren't yet ready to discuss it in person.


Why you should care about robotic process automation

RPA expects program, system, and even network heterogeneity. RPA evolved to eliminate gaps in workflows or processes that span disparate GUI-based systems. A history lesson might be helpful here. In the 1990s, packaged software suites emerged to displace fit-for-purpose GUI and text-based applications. An insurance company might once have depended on a mix of custom-built and commercial systems to support key processes such as enrollment, billing, claim filing, and claim adjustment; by the late 1990s, however, packaged applications were able to replicate many (if not most) of the features and functions these systems provided. But not all of them. More important, some subset of function-specific systems just couldn’t be replaced. The upshot was that even as enterprises restructured their business processes to accommodate packaged suites, they kept some of these processes (and their supporting IT systems) intact, too. The neat thing about RPA is that its software bots run alongside the GUI-based program(s) on the existing system.


Cloud Migration with the Help of EA

Creating a fully-functional, efficient target cloud architecture (complete with any necessary intermediary states) that accounts for the organization’s level of cloud maturity. Developing a coherent, model-based plan for transitioning from the existing systems to a cloud-focused future state is invaluable. Since cloud technology precipitates a decline in the number of both software and hardware components, it produces a change in how the IT stack works and how it’s serviced. It also triggers additional changes on the staff side – the roles that are needed, how they interact with each other now, or whether there are any redundancies. Ultimately, you need to develop a new picture of how people, processes, technology and capabilities function in the new cloud paradigm. What better way to get everyone informed, engaged, and feeling in control than by providing them with clear EA deliverables that explain how things are evolving going forward? Enterprise architects have an opportunity here to deliver immense value to a wide array of stakeholders.



Quote for the day:


"Being honest and open is the only way to convince cynical employees that you truly want to establish a partnership with them." -- Florence M. Stone