Daily Tech Digest - April 13, 2019

Android Devices Can Now Be Used as a Security Key

The new feature delivers what may be an easier alternative to Google's Titan Security Key, which the company introduced in September 2018. The Titan key bundle has two components, one of which slots into a computer via USB and another that authorizes a login via Bluetooth. The Titan keys became central to Google's Advanced Protection Program, which the company launched after years of attacks against activists, journalists and political campaign workers. In one of the most notable incidents, Hillary Clinton's campaign chairman John Podesta saw his personal emails released in 2016 after suspected Russian hackers compromised his account. More recently, the email accounts of four senior aides within the National Republican Congressional Committee were compromised for several months. The Titan keys were so successful internally at Google that they were rolled out to the public: not one of Google's 85,000 employee accounts fell to a phishing attack after the keys were introduced in early 2017, computer security writer Brian Krebs reported last year. Android devices running version 7.0 and above are compatible with the new security feature, which serves as an alternative to Titan.



How insurers can achieve data mastery

Most data and analytics IT budgets are smaller than what is needed, forcing the organization to prioritize use cases. In the many instances when data projects are IT-led, there are no metrics in place to measure their impact on business objectives. As a result, the value of many data and analytics projects is unknown. Given the focus on data and analytics, innovative insurance CIOs expecting to develop more futurist corporate cultures by 2030 will build, buy or acquire technology and data capabilities to create new and differentiated products, services and business models, creating more competition for incumbent insurers. The investments of these innovative companies are likely to average twice what their more conservative peers make. They’ll consider data their single most important asset and disrupt the industry, setting data-mastery leaders apart from those that lag behind. These companies will find greater impact from data and analytics initiatives, and use this capability to drive new business models, such as the monetization of data.


Members of the group are government departments such as the Cabinet Office, Ministry of Defence, Ministry of Justice, NHS, Department for Work and Pensions, Department for Business, Energy and Industrial Strategy, the Foreign and Commonwealth Office, Met Office, HM Revenue and Customs, and others. Government departments then liaise with their various suppliers – for example, Defra would talk mainly to Microsoft about cloud-related issues, as well as other companies providing similar services to the department, such as Amazon Web Services (AWS). The sustainable technology advice and reporting group also works closely with techUK on a number of issues, including cloud-related measures. Progress has been reported by the government around the awareness and use of sustainable digital services and technologies, according to the latest Defra report on sustainability. The annual report said departments across government have significantly reduced staff members’ individual energy footprints, to 891 kWh per staff member from a baseline figure of 1,467 kWh.


How to overcome the risk of overcorrecting

Certainly, no normal person wishes any misfortune on their fellow human, and as a frequent traveler, I have a soft spot for the hundreds of people who invest their sweat equity into allowing me to travel around the world with relative ease and comfort. However, a placard in a nice hotel implying that paying customers, the very reason hotels exist, are entities to be regarded with such deep suspicion that anyone who interacts with them requires a panic button seems like a significant overreaction. While this example might seem inane to the point of comedy, we as individuals and collective organizations are likely guilty of similar transgressions. The average employee handbook at a Fortune 500 company implies that employees are one step away from thievery, gross ethical lapses, and general tomfoolery without repeated admonishments from HR to the contrary.


“Technology moves incredibly rapidly, but you’ve got quite slow paced regulation. It became clear that we needed to address some of the regulatory concerns in the digital age, but we also put forward a fundamental recommendation about how regulation happens in the future,” he says. The proposal at the heart of the Committee’s Regulating in a digital world report, Gilbert explains, is a new Digital Authority that will make sure existing regulators are addressing key issues. The authority will also carry out a horizon scanning role, preparing for regulatory issues that could arise over time. “The job of the authority is to make sure there is nothing falling between the gaps of the existing regulators. That’s why the chief executives of the most prominent regulators in this space should sit in the same room and jointly discuss the issues,” he says. “There should then be a number of independent members, who are not part of the existing regulatory framework. There should be a powerful, independent chair who is not afraid to stand up to government and argue to parliament that more powers are needed.”


In Security, All Logs Are Not Created Equal

Of the major data breach vectors, holes in web applications – which typically have access to highly sensitive customer account information – represent the greatest percentage, according to the "2018 Verizon Data Breach Investigations Report." Unfortunately, security teams have the least visibility into web application logs. In addition, parsing web server logs is challenging because they are often in a multi-line or custom format and logged in a nonstandard way to a text file or database, as opposed to the native web server log, such as Microsoft IIS or Apache. ... Logging events from a web application firewall (WAF) already watches for potentially malicious actions. DNS server logs provide rich information about what sites users visit, and they show whether any malicious applications reach out to command-and-control sites. However, DNS also is a common tunneling protocol for exfiltrating data since firewalls typically allow the data out. DNS logs are challenging because of the volume of data, their multi-line format, and the difficulty posed in exporting them.
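As a concrete illustration of why web server logs need custom parsing, here is a minimal Python sketch for the Apache "combined" log format; the sample line and field names are invented for the example, and real logs vary by server configuration:

```python
import re

# Apache "combined" format: IP, identity, user, [time], "request", status, bytes, "referer", "user-agent"
COMBINED = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<size>\d+|-) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

def parse_line(line):
    """Return a dict of named fields for one combined-format line, or None if it doesn't match."""
    m = COMBINED.match(line)
    return m.groupdict() if m else None

sample = ('203.0.113.7 - - [13/Apr/2019:10:01:22 +0000] '
          '"GET /login HTTP/1.1" 200 1043 "-" "Mozilla/5.0"')
print(parse_line(sample)["status"])  # → 200
```

Multi-line or custom formats, as the article notes, break this kind of single-pattern approach and force per-application parsers.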


The Business Value of Cybersecurity

This lack of detail around quantifying the tangible value of following cybersecurity best practices is a problem. In fact, we believe it is an important reason why the issue is still shifting in and out of most boards’ radars. Gut feeling alone does not make for a strong-enough case: top executives are increasingly asking “Show me the data”. Beyond the fact that measuring success in cybersecurity is very hard, another issue is the stark lack of meaningful data. This is a really big problem in the field of cyber insurance, for example, which struggles to fit its traditional actuarial models around the scarce data it can get hold of. The reason for that is quite simple: most organizations are still very reluctant to share what they perceive as highly sensitive cybersecurity data. ... Being able to show key stakeholders, in business terms, exactly what the tangible value-added of cybersecurity is will be key to finally anchoring the topic at the right level of organizations. Money – and data – talk. And boards usually listen. But we’re not there yet, and cybersecurity definitely looks like a promising path for data-driven research.


Reflecting on Top-Down or Bottom-Up System Design

Vernon emphasizes that both perspectives are important, but if you are thinking about bounded contexts with their ubiquitous languages, you are probably taking a bottom-up approach. Then it’s important to learn what the core domain within your whole solution is, otherwise you may very well put your efforts in the wrong place. If you need to take a top-down perspective at some point, you can define an interface as a placeholder for something you do not yet understand how to communicate with. Later, this placeholder can be replaced by an implementation in a bounded context. As you do that, you also shift to a bottom-up perspective, because you now need a deep understanding of the domain model within that context. For Vernon, a bottom-up approach is also an emergent approach which leads to an emergent architecture, and for him this is the same as lean architecture or agile architecture and design. Trying to make your architecture immovable will not work; he believes that you will either adjust your architecture or fail.
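The placeholder-interface idea can be sketched in a few lines of Python. The `PricingService` contract and `CatalogPricing` context below are invented examples, not from Vernon's talk:

```python
from abc import ABC, abstractmethod

# Top-down: an abstract interface stands in for a collaboration
# we don't yet understand how to communicate with.
class PricingService(ABC):
    @abstractmethod
    def quote(self, product_id: str) -> float: ...

# Bottom-up, later: a bounded context supplies a real model behind the contract.
class CatalogPricing(PricingService):
    def __init__(self, prices: dict):
        self._prices = prices

    def quote(self, product_id: str) -> float:
        return self._prices[product_id]

# Callers were written against the placeholder and don't change
# when the implementation arrives.
svc: PricingService = CatalogPricing({"sku-1": 9.99})
print(svc.quote("sku-1"))  # → 9.99
```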


Government breach data highlights cyber skills misconception


“If only 36% of information security or governance staff are attending courses, it is not clear how organisations can say they are confident that their security teams have the skills and knowledge they require to be effective.” Even if there is an active training programme, Hadley contended that the skills learned are out of date before people leave the classroom, creating a false sense of security and human vulnerabilities because information security professionals are lagging behind the attackers. This is because attackers, unlike businesses and charities, are free to innovate and do not have to work within the limits of the law and security budgets, while few training programmes or computer science and certification courses are geared up to be updated on a daily or weekly basis to include newly identified threats. “They build faster than development teams and change attacks quicker than traditional training can cope with,” said Hadley. “This is the same ‘time gap’ that plagued signature-based AV [antivirus] engines for years.”


At first glance, the creation of a forum focusing on preserving human control in warfare is a step in the right direction. But Australia's Trusted Autonomous Systems Defence Cooperative Research Centre (TASDCRC) chief scientist and engineer Jason Scholz, who spoke with TechRepublic, said that while the forums have good intentions, they have not been comprehensive enough in discussing the systems of control involved in using a weapon. "It's not only about what can happen in terms of selecting a target and engaging it, autonomously or not, but the reliability of the weapon ... the context in which it's used, training, certification of the people and technology, authorisation; no one from a competent military sticks a weapon directly into a target without having an entire system of control," Scholz said. The most recent GGE on LAWS meeting took place in Geneva in late March, with over 90 countries in attendance. Much like the previous meetings, little progress was made on the use of lethal autonomous weapons.



Quote for the day:


"You will face your greatest opposition when you are closest to your biggest miracle." -- Shannon L. Alder


Daily Tech Digest - April 11, 2019

Build up a DevSecOps pipeline for fast and safe code delivery

Developers in need of a feature -- or simply in a rush -- might pull random Docker images that contain vulnerabilities from public internet repositories. Developers should always treat public container registries with extreme caution. Registry platforms, such as Harbor or even a self-hosted private local registry for the company, offer tighter control over what users deploy within an environment, and also streamline versioning and management to ensure DevSecOps doesn't impede code velocity. Docker Hub also offers certified images, but always exercise vigilance to minimize risk. Other tools ensure code builds don't ship with known vulnerabilities. For example, to prevent the release of software with vulnerable libraries, the auditing tool Open Security Content Automation Protocol (OpenSCAP) scans systems in the delivery pipeline and checks them against the Common Vulnerabilities and Exposures (CVE) library. There are several CVE feeds, both free and paid, that OpenSCAP can use. 
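The principle of failing a build on known-vulnerable components can be sketched without OpenSCAP itself. In the toy Python sketch below, the vulnerability table is a hand-written stand-in for a real CVE feed, and the manifest format is invented:

```python
# Hypothetical, hand-maintained vulnerability list; a real pipeline
# would pull an up-to-date CVE feed instead.
KNOWN_VULNS = {
    ("openssl", "1.0.1f"): "CVE-2014-0160",   # Heartbleed, as an illustration
}

def audit(manifest):
    """Return the CVE IDs matching (package, version) pairs in a build manifest."""
    return [KNOWN_VULNS[(name, ver)]
            for name, ver in manifest
            if (name, ver) in KNOWN_VULNS]

build = [("openssl", "1.0.1f"), ("zlib", "1.2.11")]
print(audit(build))  # → ['CVE-2014-0160']
```

A non-empty result would fail the pipeline stage, blocking the release before the vulnerable library ships.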


Upgrading from Java 8 to Java 12

Generally speaking each new release performs better than the previous one. "Better" can take many forms, but in recent releases we've seen improvements in startup time; reduction in memory usage; and the use of specific CPU instructions resulting in code that uses fewer CPU cycles, among other things. Java 9, 10, 11 and 12 all came with significant changes and improvements to Garbage Collection, including changing the default Garbage Collector to G1, improvements to G1, and three new experimental Garbage Collectors (Epsilon and ZGC in Java 11, and Shenandoah in Java 12). Three might seem like overkill, but each collector optimises for different use cases, so now you have a choice of modern garbage collectors, one of which may have the profile that best suits your application. The improvements in recent versions of Java could lead to a cost reduction. Not least of all tools like JLink reducing the size of the artifact we deploy, and recent improvements in memory usage could, for example, decrease our cloud computing costs.


Atlassian targets agile development at scale with Jira Align

“There are a lot of benefits [with agile development]: it allows enterprises to be more nimble, to respond quickly to pressure and change to their roadmaps as needed based on customer demands,” he said. “But one thing that we have lost through that transformation is the certainty, the visibility and the clarity across an organization of when deadlines will be hit, and when capabilities will be available for customers. Agile simply just doesn't work that way. And that is a big challenge, especially for these very large organizations that need to be more nimble. “Our biggest customers were looking for guidance for how to scale all of this agile development goodness across thousands of people,” Deatsch said. “That is actually exactly what AgileCraft does.” As an example, Deatsch said a large bank could be building a new mobile app, an effort that could involve large numbers of developers working on individual, but related, projects – from building a front-end UI to back-end transactions systems.


Cube.js: Ultimate Guide to the Open-Source Dashboard Framework

The majority of modern web applications are built as a single-page application, where the front-end is separated from the back-end. The back-end also usually is split into multiple services, following a microservices architecture. Cube.js embraces this approach. Conventionally, you run Cube.js back-end as a service. It manages the connection to your database, including queries queue, caching, pre-aggregation, and more. It also exposes an API for your front-end app to build dashboards and other analytics features. ... Analytics starts with the data and data resides in a database. That is the first thing we need to have in place. You most likely already have a database for your application, and usually, it is just fine to use for analytics. Modern popular databases such as Postgres or MySQL are well suited for a simple analytical workload. By simple, I mean a data volume with less than 1 billion rows. MongoDB is fine as well; the only thing you’ll need to add is MongoDB Connector for BI. It allows executing SQL code on top of your MongoDB data. It is free and can be easily downloaded from the MongoDB website. One more thing to keep in mind is replication. It is considered bad practice to run analytics queries against your production database mostly because of the performance issues.


Finance Remains Most Attacked Sector Globally Six of the Past Seven Years

John South of the Threat Intelligence Communication Team, Global Threat Intelligence Center at NTT Security, says: “Finance is yet again in the top spot when it comes to targeted attacks, which surely is enough evidence to convince the board that cybersecurity is a must-have investment. Many financial organizations are moving forward with digital transformation, but without prioritizing security as a core business requirement. While legacy methods and tools are still effective at providing a solid foundation for mitigation, new attack methods are continually being developed by malicious actors. Security leaders should ensure basic controls remain a primary focus, but they must also embrace innovative solutions if they provide a good fit and true value.” Mr. Fumitaka Takeuchi, Security Evangelist, Vice President, Managed Security Service Taskforce, Corporate Planning at NTT Communications, says: “Many organizations are caught up in simply buying solutions to problems that either don’t really exist, or solutions that cost more than the potential loss being prevented.”


Why Xamarin


Even after years of building for mobile, developers still heavily debate the choice of technology stack. Perhaps there isn't a silver bullet, and mobile strategy really depends on the app, developer expertise, code-base maintenance and a variety of other factors. If developers want to write .NET, however, Xamarin has essentially democratized cross-platform mobile development with polished tools and service integrations. Why are we then second-guessing ourselves? It turns out mobile development does not happen in silos, and all mobile-facing technology platforms have evolved a lot. It always makes sense to look around at what other development is happening around your app: does the chosen stack lend itself to code sharing? Are the tools of the trade welcoming to developers irrespective of their OS platform? Do the underlying pillars of the chosen technology inspire confidence? Let's take a deeper look and justify the Xamarin technology stack. Spoiler: you won't be disappointed.


LambdaTest Selenium Testing Tool Tutorial with Examples in 2019

LambdaTest Selenium Grid is a scalable, secure, and reliable cloud-based Selenium grid. It lets you perform automated cross-browser testing across all major browsers and browser versions, latest and legacy, and across operating systems. It also lets you run multiple Selenium automated tests in parallel, which allows you to cut down on your build time. In addition, it provides screenshots from over 2,000 mobile and desktop browsers, so you can perform visual cross-browser compatibility testing; there is no need to test each browser manually, as you get full-page screenshots just by selecting the configurations. ... The thing about Selenium Grid is that it can be expensive to set up additional machines as nodes, and this is where an online Selenium Grid (SaaS) can truly shine. Providers offer various packages, from entry-level pricing to enterprise plans, and the price for cloud solutions often scales linearly with the number of tests and their concurrency. This means you can scale according to your needs and keep costs under control accordingly.
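A session on any Selenium Grid, hosted or self-run, is requested with a desired-capabilities dictionary naming the browser, version, and platform. A minimal Python sketch; the hub URL, version string, and helper name are placeholders, not LambdaTest specifics:

```python
def grid_capabilities(browser: str, version: str, platform: str = "ANY") -> dict:
    """Build a desired-capabilities dict for a Selenium Grid session."""
    return {"browserName": browser, "version": version, "platform": platform}

# With a live hub (and the selenium package installed) the dict would be used as:
#   from selenium import webdriver
#   driver = webdriver.Remote(
#       command_executor="http://localhost:4444/wd/hub",   # assumed hub URL
#       desired_capabilities=grid_capabilities("chrome", "73.0"),
#   )
caps = grid_capabilities("chrome", "73.0")
print(caps["browserName"])  # → chrome
```

Parallelism then comes from opening several such remote sessions at once, up to the concurrency your plan or grid allows.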


Microservices and Distributed Transactions

Figure 2: a transaction that spans two applications and two resource managers
The usage of the two-phase commit protocol has been debated a lot since its inception. On one side, the enthusiasts tried to use it in every circumstance; on the other, the detractors avoided it in all situations. A first note that must be reported is related to performance: as with every consensus protocol, the two-phase commit increases the time spent by a transaction. This side effect can’t be avoided, and it must be considered at design time. It’s even common knowledge that some resource managers are affected by scalability limits when they manage XA transactions: this behavior depends more on the quality of the implementation than on the two-phase commit protocol itself. The abuse of two-phase commit severely hurts the performance of a distributed system, but trying to avoid it when it's the obvious solution leads to baroque and over-engineered systems that are difficult to maintain. More specifically, the integration of already existing services requires serious re-engineering when transactional behavior must be guaranteed and a consensus protocol like the two-phase commit is not used.
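The protocol itself is short: a prepare (voting) phase followed by a commit-or-rollback phase. A toy Python sketch, with no durable logs, timeouts, or recovery, all of which a real coordinator needs:

```python
class Participant:
    """A resource manager that votes in phase one and acts in phase two."""
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "active"

    def prepare(self):          # phase 1: vote yes/no
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):           # phase 2a: everyone voted yes
        self.state = "committed"

    def rollback(self):         # phase 2b: at least one voted no
        self.state = "aborted"

def two_phase_commit(participants):
    """Commit only if every participant votes yes; otherwise roll all back."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.rollback()
    return False

db, queue = Participant("db"), Participant("queue", can_commit=False)
print(two_phase_commit([db, queue]))  # → False
```

Even this sketch shows the latency cost: every transaction pays an extra round of messages before any work becomes visible.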


Craft your data stores with VM storage performance in mind


One data store is almost never enough; you usually need multiple data stores, but fewer than the number of VMs you have. Modest performance VMs can share a data store; you might put six to 12 modest VMs on a single data store, but just don't put all of one kind of VM on the same data store. You'll be in a world of hurt if you keep all your Windows domain controllers on a single data store. The data store might become saturated and slow, and if you accidentally delete it, you won't have any of the necessary controllers left. Try not to place more than 12 VMs on a data store because they all share the queues and performance of the data store, and all of them will suffer if the queues become saturated. Usually, VMs share data stores with other VMs, but high-performance VMs that need multiple disks and multiple SCSI controllers also need multiple data stores. In certain cases, a single critical VM will have its own data store and, on rare occasions, one VM might need multiple data stores.
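The placement advice above (cap each data store at roughly 12 VMs, and keep same-kind VMs apart) can be sketched as a simple round-robin planner. The VM names and the cap are illustrative:

```python
import math

def plan_datastores(vms, max_per_store=12):
    """Spread VMs across the fewest data stores that respect the per-store cap,
    interleaving so that VMs of the same kind don't all land together."""
    stores = max(1, math.ceil(len(vms) / max_per_store))
    placement = {i: [] for i in range(stores)}
    for i, vm in enumerate(vms):
        placement[i % stores].append(vm)   # round-robin separates same-kind neighbours
    return placement

# Four domain controllers plus twenty web servers, listed kind by kind.
vms = [f"dc-{n}" for n in range(4)] + [f"web-{n}" for n in range(20)]
plan = plan_datastores(vms)
print(len(plan), max(len(v) for v in plan.values()))  # → 2 12
```

With this interleaving, the domain controllers end up split across both data stores, so losing one store never takes out all of them.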


Samsung's Agile & Lean UX Journey

The greatest strength of a designer is understanding the users, argued Jo. They tried to get designers thinking about the users and to put users at the center of the team, he said, having teams focus on the real problem for real users in a desirable and usable product. Samsung applies several user-centered practices to develop products. From the start of a project, the team creates personas together so that everyone can work toward one goal without looking in different directions. Personas are connected to all of their activities; Jo mentioned that they are used in scenarios, storyboards, workflows, design reviews, and user stories. Jo explained that since their personas are added and refined based on iterative research, they become more robust and concrete with their insights. "Whenever we learn something new about users, we add or change our personas," said Jo. He stated that "the important thing is that our personas must be alive and evolving more and more, like real characters."



Quote for the day:


"When your values are clear to you, making decisions becomes easier." -- Roy E. Disney


Daily Tech Digest - April 10, 2019

The report notes that phishing attacks have become the most widespread email threat to organisations around the world, with attacks keeping pace with security controls and evolving to evade detection. “For most organisations, phishing is the number one email security threat, outranking both malware and ransomware,” the report said, highlighting the finding that one in every 99 emails is a phishing attack. “Cloud-based email, despite all of its benefits, has unfortunately launched a new era of phishing attacks,” said Yoav Nathaniel, lead security analyst at Avanan. “The nature of the cloud provides more vectors for hackers and gives them broader access to critical data when a phishing attack is successful. “Organisations are in desperate need of more information on phishing attacks and how to combat these attacks. We conducted this research to help inform organisations and shed light on how to keep sophisticated attacks out of their environment,” he said. In their analysis of emails sent to Office 365, Avanan researchers scanned every email after it had passed the default security layer, enabling them to see the phishing attacks that were caught as well as those that were missed.


What is a password spraying attack and how does it work?


It can be a dictionary attack, where you have these common passwords that people might use. What can also be used are credentials obtained through compromised websites, because many people repeat passwords across multiple sites. Usually it's a dictionary-type [attack], but taking passwords from sites that have been compromised is also a method that would be used. It [also] depends on how targeted the attack is. If they're going after a specific person, they might try to use all of the usernames associated with a given email and try all of the passwords that may have been taken from compromised sites. [They may] also try those usernames they have against a dictionary attack [of common passwords] as well. It really depends on the motive of the hacker. It would be difficult to judge an attacker's level of sophistication based on whether they use a password spraying attack or not. You'd have to look at what other mechanisms were used as part of the broader attack. Are there other things that would occur?
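On the defensive side, spraying often shows up in failed-login logs with a distinctive shape: unlike brute force (many passwords against one account), one source touches many distinct usernames. A minimal Python detection sketch; the threshold, IPs, and log format are assumptions for the example:

```python
from collections import defaultdict

def spraying_sources(failed_logins, min_users=5):
    """Flag source IPs whose failed logins span many distinct usernames,
    the signature of password spraying rather than single-account brute force."""
    users_by_ip = defaultdict(set)
    for ip, username in failed_logins:
        users_by_ip[ip].add(username)
    return [ip for ip, users in users_by_ip.items() if len(users) >= min_users]

# One sprayer cycling through accounts, plus one user mistyping their own password.
events = [("198.51.100.9", f"user{n}") for n in range(8)] + [("10.0.0.5", "alice")] * 3
print(spraying_sources(events))  # → ['198.51.100.9']
```

Real detections also window by time and correlate across IPs, since sprayers deliberately pace their attempts to stay under lockout thresholds.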


The Citizen’s Perspective on the Use of AI in Government


Citizens generally feel positive about government use of AI, but the level of support varies widely by use case, and many remain hesitant. Citizens expressed a positive net perception of all 13 potential use cases covered in the survey, except decision making in the justice system. (See Exhibit 1.) For example, 51% of respondents disagreed with using AI to determine innocence or guilt in a criminal trial, and 46% disagreed with its use for making parole decisions. While AI can in theory reduce subjectivity in such decisions, there are still legitimate concerns about the potential for algorithmic error or bias. Furthermore, algorithms cannot truly understand the extenuating circumstances and contextual information that many people believe should be weighed as part of these decisions. The level of support is high, however, for using AI in many core government decision-making processes, such as tax and welfare administration, fraud and noncompliance monitoring, and, to a lesser extent, immigration and visa processing. Strong support emerged for less sensitive decisions such as traffic and transport optimization.


The real challenge to achieving 5G: the networks

What are the 5G network challenges? The overriding one is producing a network core that is fully virtualized. Currently most networks are populated with equipment that has a dedicated single-purpose function (e.g., switch, router, NIC, RAN). This doesn’t work well when you want to be able to change and provision new services, network connections, and software solutions. The carriers have been moving towards Network Function Virtualization (NFV) for several years, but 5G has made it imperative. Why? Services such as network slicing, NB-IoT, quality-of-service offerings, intelligence at the edge, and multiple radio networks/connections all require NFV. To make NFV real, operators are installing equipment that is powered not by custom fixed-function processors, but by multi-purpose programmable servers that in many ways are similar to standard application servers in use at enterprises and in the cloud. They are fully programmable and able to run applications locally, as is required for new service offerings.


Tens of thousands of cars were left exposed to thieves due to a hardcoded password


The vulnerability, tracked as CVE-2019-9493, impacts the MyCar telematics system sold by Quebec-based Automobility Distribution. ... MyCar is one of the more advanced vehicle telematics systems, providing a wealth of useful controls. According to the MyCar website, users can use the MyCar mobile apps "to pre-warm your car's cabin in the winter, pre-cool it in the summer, lock and unlock your doors, arm and disarm your vehicle's security system, open your trunk, and even find your car in a parking lot." For these reasons, the hardcoded credentials left inside the two MyCar mobile apps were a huge security flaw. According to a security alert sent out on Monday by the Carnegie Mellon University CERT Coordination Center, before the updates, any threat actor could have extracted these hardcoded credentials from the app's source code and they could have been used "in place of a user's username and password to communicate with the server endpoint for a target user's account," granting full control over any connected cars --such as locating, unlocking, and starting any connected cars.
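Flaws like this are why build pipelines commonly scan source for credential-looking strings before release. A rough Python sketch; the patterns and the sample snippet are illustrative, far smaller than a real secret scanner's rule set:

```python
import re

# Rough patterns for credentials embedded in source code.
SECRET_PATTERNS = [
    re.compile(r'(password|passwd|pwd)\s*=\s*["\'][^"\']+["\']', re.I),
    re.compile(r'(api[_-]?key|secret)\s*=\s*["\'][^"\']+["\']', re.I),
]

def find_hardcoded_secrets(source: str):
    """Return (line number, line) pairs that look like hardcoded credentials."""
    hits = []
    for n, line in enumerate(source.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((n, line.strip()))
    return hits

app_code = 'endpoint = "https://api.example.com"\npassword = "hunter2"\n'
print(find_hardcoded_secrets(app_code))  # → [(2, 'password = "hunter2"')]
```

Anything such a scan flags belongs in a server-side secret store, never in an app binary that anyone can decompile.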


The role of the CIO in moving from a product company to a solutions and services provider

CIOs “have three different areas of responsibility,” according to Sanghrajka. The first surrounds how CIOs (CTOs or IT leaders) can provide technology to run operations efficiently. “That is number one,” he says. The second is about enabling technology to help businesses engage with the customer better, whether it’s employees, external customers or external partners. “How do you enable a system of engagement and create stickiness, which are factors that drive revenue growth?” asks Sanghrajka. “The first factor drives efficiency and the second factor drives revenue growth and customer loyalty,” he continues. The third concerns the actual role of the CIO. They need to become more strategically focused and play a more important role in helping their business transform from a product company to a solutions and services company. “An example of that is moving from a DVD business to a streaming service,” explains Sanghrajka. To embrace this, the role of the CIO is constantly changing.


NSS Labs CTO Jason Brvenik talks security testing challenges


There is so little transparency between what the user expects and what the product delivers, and the only way to know if something's being effective is to actually try it and the only people trying to beat defenses are the attackers right now. It's about transparency and accountability, allowing the enterprise to at least know the bounds of how much trust they should put in the capabilities being fielded, and how much opportunity they have to close that gap, and to protect their users, to protect their employees, and protect their shareholders. That's a key element -- it's necessary in the industry. It's nontrivial. It's somewhat sobering that I have a very small team that I call the 'Offensive Research' team that does the net new security testing capabilities, and we've yet to meet a product that we couldn't get past. What does that tell you? Of course, no product is perfect. We can't solve all problems in the industry. We can certainly try to make it much more difficult for somebody to steal from you and take your data.


Juniper opens SD-WAN service for the cloud

The service brings with it Juniper’s Contrail Service Orchestration package, which secures, automates, and runs the service life cycle across NFX Series Network Services Platforms, EX Series Ethernet Switches, SRX Series next-generation firewalls, and MX Series 5G Universal Routing Platforms. Ultimately it lets customers manage and set up SD-WANs all from a single portal. The package is also a service orchestrator for the vSRX Virtual Firewall and vMX Virtual Router, available in public cloud marketplaces such as Amazon Web Services (AWS) and Microsoft Azure, Juniper said. The SD-WAN offering also includes integration with cloud security provider ZScaler. Contrail Service Orchestration offers organizations visibility across SD-WAN, as well as branch wired and now wireless infrastructure. Monitoring and intelligent analytics offer real-time insight into network operations, allowing administrators to preempt looming threats and degradations, as well as pinpoint issues for faster recovery.


Recent Progress in Software Security 

Perhaps the most promising advance in software security involves using runtime controls that are embedded in the execution environment. This technique is sometimes called runtime application self-protection (RASP). Through the integration of behavioral and even machine-learning controls into and around an executable, a programmed protection environment emerges—one that can compensate for code weaknesses. RASP controls, cloud development, and DevOps are all tightly woven together in most software development organizations. All three aim to increase delivered code’s speed and flexibility. However, a somewhat open question is whether these three initiatives result in more secure code. Certainly, RASP will reduce the risk of any application, good or bad, but it’s unclear whether programmers write better code in the presence of RASP. Nevertheless, runtime software controls will continue to influence software security, especially in the context of new self-learning methods. Machine-learning techniques have advanced to the point at which observed behaviors can serve as training data to label new variants of software exploits.
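To make the idea concrete, here is a minimal, hypothetical sketch of the RASP pattern in JavaScript: a runtime wrapper that inspects calls into a weak piece of code and blocks any that match a known-bad pattern. The guard heuristic and function names are invented for illustration; real RASP products instrument the execution environment far more deeply.

```javascript
// Minimal illustration of the RASP idea: a runtime guard wrapped around a
// sensitive function, compensating for weaknesses in the wrapped code itself.
function withRuntimeGuard(fn, isSuspicious) {
  return function guarded(...args) {
    if (args.some(isSuspicious)) {
      // A real RASP agent would log, alert, or terminate the request here.
      throw new Error("RASP: blocked suspicious call to " + fn.name);
    }
    return fn.apply(this, args);
  };
}

// A naive, deliberately vulnerable query builder that concatenates user input.
function buildQuery(userInput) {
  return "SELECT * FROM users WHERE name = '" + userInput + "'";
}

// Very crude injection heuristic, purely for illustration.
const looksLikeInjection = (v) =>
  typeof v === "string" && /('|--|;)/.test(v);

const safeBuildQuery = withRuntimeGuard(buildQuery, looksLikeInjection);

console.log(safeBuildQuery("alice")); // normal input passes through
try {
  safeBuildQuery("x' OR '1'='1"); // injection attempt is blocked at runtime
} catch (e) {
  console.log(e.message);
}
```

Note that the guard changes nothing about the vulnerable code itself, which is exactly why it is an open question whether developers write better code when RASP is present.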


Attackers Shift to Malware-Based Cryptominers

"Since the browser is merely an application on a device, it cannot generate the same computing power as infecting the actual device," DeBeck writes. "As a result, this type of cryptojacking takes much longer to generate each coin, which may be incentivizing threat actors to refocus on malware infections to speed things up." Another incentive for the move to malware-based mining may be the halt to the Coinhive project. Coinhive's JavaScript code mined the privacy-focused currency monero. It frequently turned up on hacked websites because it could be incorporated by anyone into a website. The project proved controversial because hackers inserted it into websites without permission. The code was freely available to install, but Coinhive took a 30 percent share of mining rewards even if it was on a hacked site, which some maintained was unethical. "With Coinhive gone, threat actors would have to go to other script providers," DeBeck writes. "While there are many other providers of the same sort of scripts, the removal of Coinhive could affect the overall ability of the technically unskilled to create web-based cryptojacking attacks."



Quote for the day:


"New capabilities emerge just by virtue of having smart people with access to state-of-the-art technology." -- Robert E. Kahn


Daily Tech Digest - April 09, 2019

How to define load models for continuous testing


A realistic workload model is the core of a solid performance test. Generating load that does not reflect reality will only give you unrealistic feedback about the behavior of your system. That's why analyzing the traffic and application to generate your performance strategy is the most important task for creating your performance testing methodology. To help one of my clients build a realistic performance testing strategy, I built a program that extracts the use of its microservices in production. The objective was to present the 20% of calls that represent 80% of the production load. Through this extraction, the program guides the project in building a continuous testing methodology for the client's main microservices. One of the biggest limitations is the lack of information stored in the HTTP logs or the data stored in APM products. Unfortunately, there is just too much missing information to automatically generate the load testing scripts. Technically, with a tool like my prototype, you'll have everything you need to build test scripts, test definitions, and test objectives.
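As a sketch of the extraction described above, the following JavaScript computes the smallest set of calls that covers 80% of production load. The endpoint names and counts are hypothetical stand-ins for what would be mined from production HTTP logs:

```javascript
// Given per-endpoint call counts, find the smallest set of endpoints that
// accounts for a target share (default 80%) of total production traffic.
function topEndpointsForLoad(callCounts, coverage = 0.8) {
  const total = Object.values(callCounts).reduce((a, b) => a + b, 0);
  // Sort endpoints by call volume, busiest first.
  const sorted = Object.entries(callCounts).sort((a, b) => b[1] - a[1]);
  const selected = [];
  let covered = 0;
  for (const [endpoint, count] of sorted) {
    if (covered / total >= coverage) break; // target share reached
    selected.push(endpoint);
    covered += count;
  }
  return selected;
}

// Hypothetical production counts per microservice endpoint.
const counts = {
  "/search": 5000,
  "/product": 3000,
  "/cart": 1200,
  "/checkout": 500,
  "/profile": 300,
};
console.log(topEndpointsForLoad(counts)); // → ["/search", "/product"]
```

Here two of the five endpoints already carry 80% of the 10,000 calls, which is the 20%-of-calls/80%-of-load shape the extraction is looking for.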



Data Modeling with Indexes in RavenDB

When it comes to data modeling, indexes in relational databases usually don’t enter the equation. However, in RavenDB, indexes serve not only to enhance query performance but also to perform complex aggregations using map/reduce transformations and computations over sets of data. In other words, indexes can transform data and output documents. This means they can and should be taken into account when doing data modeling for RavenDB. Index definitions are stored within your codebase and are then deployed to the database server. This allows your index definitions to be source-controlled and live side by side with your application code. It also means indexes are tied to the version of the app that is leveraging them, making upgrades and maintenance easier. Indexes can be defined in C#/LINQ or JavaScript. For this article, we’ll use JavaScript to show off this feature of RavenDB. It’s worth noting that JavaScript index definitions currently support up to ECMAScript 5, but this will improve as the JavaScript runtime RavenDB uses adds support for ES2015 syntax in the near future.
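As a rough illustration of what such an index computes, here is a plain-JavaScript simulation of a map/reduce index over order documents. In RavenDB itself, the map and reduce functions would live in a deployed index definition and run server-side; the document shape and field names here are hypothetical, and the index functions are written in ES5 style to match the constraint mentioned above.

```javascript
// Sample "order" documents, as they might be stored in the database.
const orders = [
  { Company: "acme", Lines: [{ Quantity: 2, Price: 10 }] },
  { Company: "acme", Lines: [{ Quantity: 1, Price: 5 }] },
  { Company: "globex", Lines: [{ Quantity: 3, Price: 4 }] },
];

// Map: emit one entry per order with that order's total.
var map = function (order) {
  var total = 0;
  for (var i = 0; i < order.Lines.length; i++) {
    total += order.Lines[i].Quantity * order.Lines[i].Price;
  }
  return { Company: order.Company, Total: total };
};

// Reduce: group the mapped entries by Company and sum the totals.
var reduce = function (entries) {
  var groups = {};
  entries.forEach(function (e) {
    groups[e.Company] = (groups[e.Company] || 0) + e.Total;
  });
  return Object.keys(groups).map(function (c) {
    return { Company: c, Total: groups[c] };
  });
};

var results = reduce(orders.map(map));
console.log(results);
// → [ { Company: 'acme', Total: 25 }, { Company: 'globex', Total: 12 } ]
```

The reduce output is itself a set of documents (one per company), which is what makes these indexes a data-modeling concern rather than only a query-speed concern.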


How to Build a Culture Bridge to the Cloud


The DevOps culture necessary to effectively use open source, cloud native technologies has fundamentally changed software and team processes. It is expanding how we work and think. For some, this presents an exciting opportunity. Others approach it with more trepidation. Startups, in general, are on board. They don’t have entrenched technology that needs to be maintained and upgraded. They are also able to hire people whose skill sets are a good fit with newer technologies. For enterprises, it’s a bit tougher. They have massive investments in workhorse technologies and platforms such as Java and WebLogic. But they also have IT teams with deep heritage and operational knowledge in building, deploying, running and maintaining applications over decades. Understandably, their developers don’t necessarily want to become experts in infrastructure and in projects such as Kubernetes. They may not see the value in having novices muck around with it. As long as the developer and operations teams remain separate, they each have a measure of power and a measure of comfort.



Machines and devices are everywhere, connected—and multiplying. These are the “things” of the Internet of Things, and today there are nearly three devices attached to the internet for every human on the planet. By 2025 that ratio will soar to 10 to 1. For consumers, that means their thermostats and refrigerators can be connected to real-time, sophisticated analytics engines that automatically adjust them to be more efficient and save more money. But what does that mean for businesses? Well, just as it’s doing for consumers, IoT is helping businesses streamline operations, save money and time with real-time, actionable intelligence, and prevent problems with predictive analytics. But there’s a dark side to IoT. Frankly, it’s the concerning underbelly that exists in all connected technologies: lax security. We already see massive DDoS attacks driven by IoT devices. Experts concede that is just the tip of the iceberg. In all, analysts project the global IoT market to exceed the $1 trillion mark in 2022. Today, companies in every industry rely on IoT as part of their business strategy. 


GDPR at a critical stage, says information commissioner


“We find ourselves at a critical stage. For me, the crucial, crucial change the law brought was around accountability. Accountability encapsulates everything the GDPR is about.” Denham said the GDPR enshrines in law an onus on companies to understand the risks that they create for others with their data processing, and to mitigate those risks. It also formalises the move away from box ticking to seeing data protection as something that is part of the cultural and business fabric of an organisation, and it reflects that people increasingly demand to be shown how their data is being used, and how it is being looked after, she added. However, she said this change is not yet evident in practice. “I don’t see it in the breaches reported to the ICO. I don’t see it in the cases we investigate, or in the audits we carry out,” she said. Denham said this is both a problem and an opportunity. “It’s a problem because accountability is a legal requirement, it’s not optional. But it is an opportunity because accountability allows data protection professionals to have a real impact on that cultural fabric of your organisation,” she said.


Gaming company boosts call center employee engagement


Many companies use design thinking to improve the customer experience. After finding it useful in the CX realm, businesses now try to apply similar approaches to improve employee engagement. Electronic Arts (EA) Inc. found this approach helpful to improve the engagement of call center employees, who typically experience the brunt of customer complaints. "No one ever calls us when something good is happening," said Abby Eaton, manager of employee experience at EA. "They are calling because something has gone wrong and they are already frustrated, so the complexity of the advisers' jobs is challenging." Design thinking, which can help improve the design of spaces, physical products and applications, has been a trend since the 1990s. Now, companies are applying this same approach to improve applications in the workplace -- cutting costs and improving worker productivity, said Parminder Jassal, Work and Learn Futures group director at the Institute for the Future, a think tank in Palo Alto, Calif.


Innovation Nation: Blockchain much bigger than Bitcoin

Transparency works well for Bitcoin's blockchain but it might not suit say a large company's supply-chain system where it doesn't want suppliers and contractors to see each other's transactions. Immutability is a double-edged sword: if a fraudulent or erroneous transaction is recorded on the blockchain, there's no easy way to amend or delete it. The only way to fix that is to go back in time on the blockchain, and start again at that point to invalidate the transaction, provided everyone in the network agrees to do that. This effectively creates a new version of the software, and thus a new cryptocurrency that's not compatible with the older one. Not being able to delete or amend information could also make blockchain data stores incompatible with tightening global privacy rules that give individuals the right to "be forgotten" and have their details deleted if they so wish. Muir says we don't know the answer to that yet. Likewise, accessing blockchain data requires the use of a digital cryptographic key that has to be kept secure.


5 mistakes that doom a DevOps transformation from the start


The delivery pipeline in DevOps consists of feedback loops that allow you to inspect, reflect, and decide if you are still doing the right things in the right way. As you get better and smarter and learn more, you'll see ways to improve, to optimize, to cut out steps that are not providing value. Often those improvements require some investment and extra effort to implement. If you don't take the time to fix the pipeline when you see the ways to improve, you are just investing in a wasteful process. You are doing the process for the sake of the process, not to add the maximum value to what you are delivering. The sooner you improve, the sooner you reap the benefits of that improvement. It isn't just a matter of reviewing the process twice a year or every quarter. Continuous improvement is a cultural shift that says everyone should get better all the time. Every time you go through the process, you get a little better and learn a little more.


A Glimpse into WebAssembly


One of the biggest features WebAssembly has been touting is performance. While the overall performance is trending to be faster than JavaScript, the function-to-function comparison shows that JavaScript is still comparable in some benchmarks, so your mileage may vary. When comparing function execution time, WebAssembly is predicted to be about 20-30% faster than JavaScript, which is not as much as it sounds since JavaScript is heavily optimized. At this time, the function performance of WebAssembly is roughly about the same or even a little worse than JavaScript — which has deflated my hopes in this arena. Since WebAssembly is a relatively new technology, there are probably a few security exploits waiting to be found. For example, there are already some articles around exploiting type checking and control flow within WebAssembly. Also, since WebAssembly runs in a sandbox, it was susceptible to the Spectre and Meltdown CPU exploits, but this was mitigated by browser patches. Going forward, there will be new exploits. Also, if you are supporting enterprise clients using IE or other older browsers, then you should lean away from WebAssembly.
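For readers who have not seen WebAssembly up close, here is a minimal module hand-assembled as raw bytes, following the binary format's standard "add" example, and instantiated from JavaScript. This is the interop surface that the function-to-function benchmarks above are exercising:

```javascript
// A minimal WebAssembly module, written out as raw bytes, that exports a
// single function add(i32, i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
  // Type section: one function type, (i32, i32) -> i32
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
  // Function section: one function, using type 0
  0x03, 0x02, 0x01, 0x00,
  // Export section: export function 0 under the name "add"
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,
  // Code section: local.get 0, local.get 1, i32.add, end
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
]);

// Compile and instantiate synchronously (fine for a tiny module like this;
// real applications should prefer WebAssembly.instantiateStreaming).
const wasmModule = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(wasmModule);

console.log(instance.exports.add(2, 3)); // → 5
```

In practice modules are compiled from C, C++ or Rust rather than written byte by byte, but the exported-function call is the same either way.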


Is Hadoop’s legacy in the cloud?

What many people failed to realise is that Hadoop itself is more of a framework than a big data solution. Plus, with its broad ecosystem of complementary open source projects, Hadoop was too complicated for most businesses. It needed a level of configuration and programming knowledge that could only be supplied by a dedicated team to fully leverage it. Even when there was a dedicated internal team, it sometimes needed something extra. For instance, one of Exasol’s clients, King Digital Entertainment, makers of the Candy Crush series of games, couldn’t get the most out of Hadoop. It wasn’t quick enough for the interactive BI queries that the internal data science team demanded. They needed an accelerator on a multi-petabyte Hadoop cluster that allowed their data scientists to interactively query the data. The world of data warehousing has changed in recent years, and Hadoop has had to adapt. The IT infrastructure of 2009-2013, when Hadoop was at the peak of its fame, differs greatly from the IT infrastructure of today.



Quote for the day:


"Leaders need to strike a balance between action and patience." -- Doug Smith


Daily Tech Digest - April 08, 2019

Node.js vs. Java: An epic battle for developer mind share

For all its success, though, Java never established much traction on the desktop or in the browser. People touted the power of applets and Java-based tools, but gunk always glitched up these combinations. Servers became Java’s sweet spot. Meanwhile, what programmers initially mistook as the dumb twin has come into its own. Sure, JavaScript tagged along for a few years as HTML and the web pulled a Borg on the world. But that changed with AJAX. Suddenly, the dumb twin had power. Then Node.js was spawned, turning developers’ heads with its speed. Not only was JavaScript faster on the server than anyone had expected, but it was often faster than Java and other options. Its steady diet of small, quick, endless requests for data have since made Node.js more common, as webpages have grown more dynamic. While it may have been unthinkable 20 years ago, the quasi-twins are now locked in a battle for control of the programming world. On one side are the deep foundations of solid engineering and architecture. On the other side are simplicity and ubiquity.



Why you need to align your cloud strategy to business goals

The aim here is to make sure security is supporting the business objectives and strategy. Every department in the hybrid model, in turn, decides how their security efforts contribute to the overall risk reduction and better security posture. This means setting a baseline of security controls, communicating this to all business units, and then gradually rolling out training, updating policies and setting risk, assurance and audit processes to match. While developing this baseline, however, input from various departments should be considered, as it is essential to ensure adoption. When an overall control framework is developed, departments are asked to come up with a specific set of controls that meet their business requirements and take distinctive business unit characteristics into account. This should be followed up by gap assessment, understanding potential inconsistencies with the baseline framework.


Gamification: Understanding The Basics


Games make us happy because they are hard work that we choose for ourselves. And it turns out that nothing makes us happier than good, hard work. We don’t normally think of games as hard work. After all, we play games, and we have been taught to think of play as the opposite of work. But nothing could be further from the truth. ... A game is an opportunity to focus our energy on something better. On something that will make us better. On something we are good at, or are getting better at, and enjoy. As mentioned above, gameplay is the opposite of depression. And that’s why so many games are addictive: because they are able to boost our positive belief that we are capable of doing and achieving something. When we’re in a state of optimistic engagement, it suddenly becomes biologically more possible for us to think positive thoughts. ... Real-world hard work isn’t hard enough. You read that right. We become bored and feel underutilised. And this happens specifically in bigger companies, where you feel that you don’t make a big impact by doing your small piece of work. This relates to one of the levels of Maslow’s hierarchy: feeling appreciated for what you do.


Critical infrastructure under relentless cyber attack

“Nation-state attacks are especially concerning in the OT sector because they’re typically conducted by well-funded, highly capable cyber criminals and are aimed at critical infrastructure,” the report said. The report is based on the analyses of responses from 701 representatives of the US, UK, Germany, Australia, Mexico and Japan working in industries that rely on industrial control systems (ICS) and other forms of OT. The report revealed that cyber attacks are relentless and continuous against OT environments. Most organisations in the OT sector have experienced multiple cyber attacks causing data breaches and/or significant disruption and downtime to business operations, plants and operational equipment, with many being hit by nation-state attacks, the report said. The findings showed cyber attacks are having an effect on physical systems, according to Eitan Goldstein, senior director, strategic initiatives at Tenable. “That is a really big change and that’s why the risk isn’t just theoretical anymore,” he told the BBC.


5 Cybersecurity Myths Banks Should Stop Believing

Many senior execs believe that appointing a C-level exec to oversee a problem or challenge will take care of it or make it go away. If you need proof, consider how many companies now have a Chief analytics, AI, brand, customer, data, digital, experience, knowledge...you don't really want me to go on, do you...Officer. I'm all for a Chief Information Security Officer (CISO), but many business execs think that, by having one, that person (and IT) has the cybersecurity efforts under control. It doesn't work that way. The CISO of a $3 billion bank told me: “I may be responsible for the security of the bank’s information, but it’s the executive team and functional heads who must ensure that we manage and mitigate the day-to-day operational risks of cybersecurity efficiently and effectively.” Data breaches and cyberattacks affect the entire enterprise, not just a single unit, division, or department. Decisions to mitigate these threats shouldn’t be relegated to IT. In addition, cyberincidents require communications with the institution’s customers, employees, partners, and media. The executive team and board should help script the organization's responses.


True Cybersecurity Means a Proactive Response

New cyber threats are emerging regularly and the solution to them lies in an aggressive, pre-emptive, proactive posture. Successful and secure organizations must begin to think this way if they want true data security. To do this, organizations must pivot in their security mindset and begin to implement solutions that take a comprehensive look and map all legitimate executions of an application based on the codes written by its creators, such as Microsoft and Adobe. With that map, they can identify any inconsistencies or deviation from their source code. Recognized patterns and actions can then be confirmed in real time, while unidentified activities are reviewed and blocked instantaneously. A proactive approach is a critical mindset change and an imperative if companies want to ensure they are in control of their network security. If organizations remain reactive, they will continue to consume valuable resources and risk their reputations as they chase after and remediate the mess left after the cyberattack has happened.


Migrating a Retail Monolith to Microservices

Architectural and organisational model
The autonomy principles include that a team can work and deploy independently; they should never have to wait for, or synchronize with, another team. Implementation details should be hidden from other teams, and failures isolated within services to make them resilient. The principles also state that for each data store there must be exactly one service responsible. The first team rule concerning automation is that scaling must be horizontal and automated. Teams should also embrace a culture of automation, automating testing, deployment and operations as much as possible. They are encouraged to deploy to production early and often, but also to be able to quickly roll back in case of errors. To enable this, services must be highly observable. For all teams, communication is standardized and asynchronous where possible. For synchronous communication they use REST (maturity level 2, without hypermedia), and Kafka for asynchronous communication.


Performance-Based Routing (PBR) – The gold rush for SD-WAN

The inefficiency factor in the case of traditional routing is one of the main reasons why SD-WAN is really taking off. SD-WAN vendors are adding proprietary mechanisms to their routing in order to select the best path, not the shortest path. Originally, we didn't have real-time traffic, such as voice and video, which is latency and jitter sensitive. We also assumed that all links were equal. But in today's world, we witness more of a mix and match, for example, 100Gig and slower long-term evolution (LTE) links. The assumption that the shortest path is the best no longer holds true. To overcome the drawbacks of traditional routing, we have had the onset of new protocols, such as IPv6 segment routing and named data networking, along with specific SD-WAN vendor mechanisms that improve routing. For optimum routing, effective packet steering is a must. And SD-WAN overlays provide this by utilizing encapsulation, which could be a combination of GRE, UDP, Ethernet, MPLS, VxLAN and IPsec. IPv6 segment routing implements a stack of segments inserted in every packet, and named data networking can be distributed with routing protocols.
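A toy sketch of best-path (rather than shortest-path) selection: score each candidate link on measured latency, jitter and loss, weighted per traffic class. All link names, metrics and weights below are invented for illustration; real SD-WAN implementations use far richer telemetry and proprietary steering logic.

```javascript
// Pick the best link for a traffic class by weighted penalty score
// (lower is better), instead of by hop count.
function bestLink(links, weights) {
  let best = null;
  for (const link of links) {
    const score =
      weights.latency * link.latencyMs +
      weights.jitter * link.jitterMs +
      weights.loss * link.lossPct;
    if (best === null || score < best.score) best = { ...link, score };
  }
  return best;
}

// Hypothetical measurements: a short but congested MPLS path
// versus a longer but clean LTE path.
const links = [
  { name: "mpls", latencyMs: 20, jitterMs: 15, lossPct: 2 },
  { name: "lte", latencyMs: 35, jitterMs: 2, lossPct: 0.1 },
];

// Voice traffic penalizes jitter and loss far more than raw latency.
const voiceWeights = { latency: 1, jitter: 5, loss: 20 };
console.log(bestLink(links, voiceWeights).name); // → "lte"
```

Even though the MPLS path has lower latency, the jitter and loss penalties push a voice flow onto the LTE path, which is exactly the kind of decision shortest-path routing cannot make.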


5 reasons CIO career paths go south -- and how to protect yourself


Commerce in the information age has introduced a multitude of regulations that can threaten a CIO career path. Whether it's the Sarbanes-Oxley Act, which ensures the accuracy of financial reporting, or GDPR, which protects consumer data, businesses face a plethora of regulatory requirements that inevitably require IT systems to manage. In some industries, the number and diversity of regulatory mandates have been known to cause compliance fatigue, where people start getting sloppy about compliance as the volume of requirements increases. Compliance failures can not only result in a CIO's dismissal, but they can also cause enterprise-threatening damage due to big fines, lawsuits and even criminal prosecution. Just as damaging are failures in governance, where there are no systems in place to track and enforce a company's internal policies. A perfect example is the public embarrassment Facebook had to deal with during the 2018 Cambridge Analytica scandal. 


Information Architects: Artificial Intelligence's Best Friend

Consider how behind the curtains of brilliant AI sit astounding designs that pave the way for instantaneous data retrieval. These pathways and storage units, though each is initially the property of individual teams and business units, are integrated into a holistic framework by the efforts of Information Architects—the unsung heroes of AI—to create an enterprise-wide repository of knowledge to link departments and applications and just about anything else with clues into user behaviour. But no matter the data source, IAs must first groom input channels fed to AI systems in order to spotlight worthy patterns of interest. Everything is given an attribute and a value, and while not all data points will even contribute to an overall AI analysis, knowledge across an enterprise must nonetheless be put within accessible structures to help a system draw its own conclusions. IAs curate data according to real business needs to achieve specific, strategic solutions—and they use AI to adroitly connect the results of intelligence gathering.



Quote for the day:


"There are some among the so-called elite who are overbearing and arrogant. I want to foster leaders, not elitists." -- Daisaku Ikeda