Daily Tech Digest - February 22, 2019

Cloud Washing: How to Spot It and How to Avoid It
Cloud washing occurs when software providers attach the “cloud” label to a program in an effort to rebrand it or boost sales. These programs differ from cloud-native software, which is built specifically for the cloud. Cloud-washed tools take advantage of companies that want to integrate cloud solutions into their infrastructure. As such, they should be avoided at all costs. Though cloud washing is a serious problem, it is easy to spot if you know what you’re looking for. We’ve put together some tips on how to tell if a program is cloud washed and what cloud-native tools actually look like. Companies cloud wash by claiming that legacy software (old or outdated software that continues to be supported) is cloud software. Whether it’s a ploy to attract uninformed customers or evidence of a lack of cloud understanding, marketers will attach the word “cloud” to their old programs to boost sales. This tends to happen with tools that connect to the Internet.

Why AI Transformation Is Digital Transformation, Fully Realized

We now know that data collected from one channel needs to inform efforts in every other channel, and that technologies introduced as channel-specific tools now need to work across entire organizations — something even the marketing clouds have trouble with. Because of the way marketing technology has evolved, marketers are left managing very complicated tech stacks composed of multiple technologies, stitched together to complete what should be seamless and interconnected marketing processes. It’s no wonder that even though companies have more technology at their disposal than at any other point in history, only 39% of executives today say they feel they have the digital capabilities they need to compete. As someone who has spent the last decade reimagining how to process, analyze and act on audience, channel and tactic data at scale, I believe the introduction of artificial intelligence (AI) will be the final tipping point for marketing’s digital transformation — despite the challenges that remain. Here’s how.

How elite investors use artificial intelligence and machine learning to gain an edge

"The rise of machine learning will really make our industry unrecognizable in the future," said Anthony Cowell, head of asset management for KPMG in the Cayman Islands. His clients include some of the world's largest asset managers, hedge funds and private-equity firms. For instance, Citi Private Bank has deployed machine learning to help financial advisors answer a question they're frequently asked: What are other investors doing with their money? By using technology, the bank can anonymously share portfolio moves being made by clients all over the planet. "Traditionally that kind of information was sourced from your network. You might have had a few coffees or heard about it over a cocktail," Philip Watson, head of the global investment lab at Citi and chief innovation officer at Citi Private Bank, told CNN Business. "Now, we can share insight that is very valuable."
Citi also built a recommender engine that uses machine learning tools to advise clients.

Behind-the-scenes look at 5G Evolution

Back then we thought about higher bitrates, increased spectrum efficiency, and so on. We also had a few ideas that we were not able to get into the LTE standard due to backward compatibility issues – most notably what we call "lean carrier". We also put energy efficiency high on the requirement list. Most importantly, we saw a need for solutions that could support all kinds of communication needs, way beyond traditional services; serving the forecast massive IoT market, for example, was one of our key requirements. And, in order to really stretch our design, we added support for critical machine-type communication, which is now known as URLLC (ultra-reliable low-latency communication). ... 5G will also be combined with edge computing and third-party applications running close to the devices. On the edge, AI will be able to learn and control most of our infrastructure in smart cities and smart manufacturing in factories. So the transformational impact of 5G will be enormous compared to the previous Gs.

4 Promising Use Cases Of Blockchain In Cybersecurity

Hackers often gain access to systems by exploiting weaknesses in edge devices. These include routers and switches. Now, other devices such as smart thermostats, doorbells and even security cameras are also vulnerable. Simply put, the necessary rigor is often not applied to ensuring these IoT devices are secure. Blockchain technology can be used to protect systems and devices from attacks. According to Joseph Pindar, co-founder of the Trusted IoT Alliance, blockchain can give those IoT devices enough “smarts” to make security decisions without relying on a central authority. For instance, devices can form a group consensus regarding the normal occurrences within a given network and lock down any nodes that behave suspiciously. Blockchain technology can also protect the data exchanges happening between IoT devices. It can be used to attain near real-time secure data transmissions and ensure timely communication between devices located thousands of miles apart.
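The group-consensus idea Pindar describes can be sketched in a few lines. This is an illustrative toy only (the device names and majority threshold are invented, and a real deployment would record votes on a distributed ledger): peer devices vote on whether a node's behaviour looks normal, and a simple majority triggers lockdown without any central authority.

```python
# Hypothetical sketch of decentralized lockdown-by-consensus. Each peer
# device submits a boolean vote; True means "this node looks suspicious".
def consensus_lockdown(votes: dict[str, bool], quorum: float = 0.5) -> bool:
    """Return True if a strict majority of peers flag the node as suspicious."""
    if not votes:
        return False  # no peers, no consensus
    suspicious = sum(1 for flagged in votes.values() if flagged)
    return suspicious / len(votes) > quorum

# Two of three peers flag the node, so the network locks it down.
votes = {"thermostat": True, "doorbell": True, "camera": False}
print(consensus_lockdown(votes))  # True
```

Because the decision emerges from the peers' votes rather than a central server, compromising any single device is not enough to suppress the lockdown.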

Thales to sell nCipher to Entrust Datacard

This deal with Entrust Datacard is expected to close during the second quarter of 2019, subject to the successful completion of the acquisition of Gemalto by Thales and the approval of Entrust as a suitable purchaser by the European Commission, US Department of Justice, Australian Competition and Consumer Commission, and New Zealand Commerce Commission. Thales said the deal will enable nCipher Security, which has more than 300 employees and reported more than €100m in revenues in 2018, to “continue to deliver innovative solutions and services and strengthen its market leadership”. It added that Entrust Datacard is a global leader in public key infrastructure (PKI) solutions and services, and that PKI is the primary use case for GP HSMs in protecting infrastructure private keys such as root and issuing certification authority keys. “This makes Entrust Datacard the ideal organisation for Thales to divest this business, ensuring its leadership position in the GP HSMs market and providing trust, integrity and control to business-critical applications,” the company said.

How AI can help to prevent the spread of disinformation

Disinformation has spawned a new sub-industry within journalism, with fact checkers working around the clock to analyse politicians’ speeches, articles from other publications and news reports, and government statistics among much else. But the sheer volume of disinformation, together with its ability to multiply and mutate like a virus on a variety of social platforms, means that thorough fact-checking is only possible on a tiny proportion of disputed articles. While technology has provided the seedbed and distribution for disinformation, it also offers a solution to the issue. Artificial intelligence in particular offers powerful tools in the fight against disinformation, working on multiple levels to identify dubious content. These techniques are broadly split between content-based and response-based identification. The former works much like a human fact checker, by matching the content of an article with trusted sources of information to highlight errors or outright lies.

10 Principles for Modernizing Your Company’s Technology

Use cross-functional teams to plan and design this modernization effort. Functional experts from areas such as IT, strategy, R&D, customer interaction, and operations can all work together in an agile “sandbox” environment to design the changes around a set of coordinated specifications. In this early stage, and throughout the initiative, you thus link leading-edge knowledge of the changing technology with deep, day-to-day awareness of the desired results. As you bring these teams together, you will establish a shared frame of reference — a common language to describe the features you want and the capabilities you are building. This also will help engage new stakeholders as they join in the effort. A major transportation company revamped its online system this way, improving the integration between the website that handled passenger bookings and the back-office functions that, among other things, routed travel. In its intensive sandbox sessions, the company set up temporary cross-functional working groups, which became known as “tribes.”

Cisco warns on HyperFlex security vulnerabilities

“An attacker could exploit this vulnerability by connecting to the cluster service manager and injecting commands into the bound process,” Cisco wrote in its Security Advisory. Cisco says that the vulnerability is due to insufficient input validation in Cisco HyperFlex software releases prior to 3.5. Such input can impact the control flow or data flow of a program and cause a number of resource control problems. Cisco has released a software update to address this vulnerability and said that there are no other workarounds to address this exposure. The second vulnerability – rated 8.1 on Cisco's scale – is a snafu in the hxterm service of Cisco HyperFlex Software that could let an attacker connect to the service as a non-privileged, local user. A successful exploit could allow the attacker to gain root access to all member nodes of the HyperFlex cluster in Cisco HyperFlex software releases prior to 3.5, according to the security advisory.
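The underlying failure class (insufficient input validation that lets extra commands ride into a privileged process) can be illustrated generically. This is not HyperFlex code, just a hedged sketch contrasting an injectable shell invocation with allow-list validation plus an argument list:

```python
# Illustrative only -- not Cisco's code. Contrasts how unvalidated input
# reaches a shell versus how validation and an argument list block it.
import re
import subprocess

def run_unsafe(user_input: str) -> None:
    # shell=True parses the whole string, so "x; rm -rf /" injects a command
    subprocess.run(f"echo {user_input}", shell=True)

def run_safe(user_input: str) -> None:
    # allow-list validation rejects shell metacharacters outright
    if not re.fullmatch(r"[A-Za-z0-9_.-]+", user_input):
        raise ValueError("input rejected by validation")
    # an argument list goes to the program directly, with no shell parsing
    subprocess.run(["echo", user_input])

run_safe("hostname-01")                     # accepted
# run_safe("x; cat /etc/passwd")            # would raise ValueError
```

The 3.5 software update plays the role of `run_safe` here: it adds the validation step so crafted input can no longer alter the control flow of the bound process.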

How and why the data privacy mandate is expanding

A battle is also brewing in the US over state and federal privacy laws. Several states have passed laws aimed at data privacy and ethical use. The most prominent and restrictive of these is the California Consumer Privacy Act of 2018, set to take effect in 2020 and billed as the toughest data privacy law in the country (incorporating many GDPR-like restrictions). Many companies have lobbied against this and other state bills, pushing for less restrictive measures and asking that a uniform federal law supersede all state legislation. To this end, both the US Chamber of Commerce and the Internet Association, which represents companies like Amazon, Facebook, Google, and Twitter, have released their own recommendations for a federal bill. The Data Care Act introduced by a group of US senators, a competing congressional bill, the Information Transparency and Personal Data Control Act, and the White House recommendations round out the plethora of proposals.

Quote for the day:

"The ultimate measure of a man is not where he stands in moments of comfort, but where he stands at times of challenge and controversy." -- Martin Luther King, Jr.

Daily Tech Digest - February 21, 2019

Data Mining — What, Why, How?

Data mining sits at the intersection of statistics (the analysis of numerical data), artificial intelligence / machine learning (software and systems that perceive and learn like humans, based on algorithms), and databases. Translating these into technical skills leads to requiring competency in Python, R, and SQL, among others. In my opinion, a successful data miner should also have business context/knowledge and other so-called soft skills (teamwork, business acumen, communication, etc.) in addition to the above-mentioned technical skills. Why? Remember that data mining is a tool with the sole purpose of achieving a business objective by accelerating predictive capabilities. Pure technical skill will not accomplish that objective without some business context. The following article from KDnuggets supports my point: data mining job advertisements mention the following terms very frequently: team skills, business acumen, and analytics, among others. The same article also has SQL, Python, and R at the top of the list of technical skills.

Two Sides of a Coin: Blockchain, Ethics and Human Rights

What does it mean to say that a technology is evil? Given Krugman’s arguments, it’s easy to see what he meant: bitcoin is used exclusively for acts which are morally bad; hence, bitcoin is itself evil. As an ethical argument, this is willfully ignorant; you don’t need a Nobel Prize to find examples of blockchain being used for social good. But, interestingly, the underlying thought pattern – that bitcoin is evil because it brings about bad consequences – is an example of a legitimate moral theory known as consequentialism. If Krugman was arguing along consequentialist lines, his error lies in disregarding bitcoin’s positive aspects and in the failure to make the assumption of this ethical framework explicit. Intrigued, we started searching the academic databases for ethical frameworks applied to blockchain, but found nothing. Yet we kept finding controversies surrounding certain blockchain use cases which relied implicitly on the ethical frameworks that philosophers have developed over thousands of years.

Zuckerberg Eyeing Blockchain For Facebook Login And Data Sharing

In the interview, Zuckerberg said that authentication was a use of blockchain that he is potentially interested in. However, he caveated it by saying: “I haven’t found a way for this to work.” He added: “You basically take your information, you store it on some decentralized system, and you have the choice of whether to log in in different places, and you’re not going through an intermediary.” “There’s a lot of things that I think would be quite attractive about that. For developers, one of the things that is really troubling about working with our system, or Google’s system for that matter, or having to deliver services through Apple’s App Store is that you don’t want to have an intermediary between serving the people who are using your service and you.” “Where someone can just say 'hey, we as a developer have to follow your policy and if we don’t, then you can cut off access to the people we are serving'. That’s kind of a difficult and troubling position to be in.”

Power over Wi-Fi: The end of IoT sensor batteries?

The researchers believe that harvesting 150 microwatts of power (the power level of a typical Wi-Fi signal) with one of the rectennas could produce around 40 microwatts of electricity—enough to power a chip. Scaling the system to a vehicle, data center hall, or similar-sized setup, which they say is possible in part because their MoS2 material is thin and flexible, would conceivably generate commensurate power. The researchers also say the non-rigid, battery-free system is better than others’ attempts at rectennas because it captures “daily” signals such as “Wi-Fi, Bluetooth, cellular LTE, and many others," says Xu Zhang, of collaborator Carnegie Mellon University, in the article. Other radio-frequency-to-power converters, which are thick and non-flexible, aren’t wideband enough, the groups say. Of course, radio waves already power some chips. RFID tags are an example. But those solutions are limited in their power and, therefore, range and bandwidth, which is why the search is on for something better.
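As a back-of-the-envelope check on the figures quoted, the ratio of harvested to incoming power implies the conversion efficiency (a rough derived number, not one stated in the article):

```python
# 150 microwatts of ambient Wi-Fi in, roughly 40 microwatts of DC out.
incoming_uW = 150
harvested_uW = 40
efficiency = harvested_uW / incoming_uW
print(f"RF-to-DC conversion efficiency: {efficiency:.0%}")  # about 27%
```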

UK committed to working with EU cyber security partners

Within the cyber security sphere, Martin said it was “objectively true” that nearly all the functions of the NCSC fall outside the scope of EU competence. “It follows that our enhanced cooperation with European partners, and the EU as a whole, in cyber security over recent years is not automatically affected by the UK’s changing relationship with the EU,” he said. “Pretty much everything we do now to help European partners, and what you do to help us, on cyber security can, should, and I am confident will, continue beyond 29 March.” In the past, said Martin, the UK has shared classified and other threat data with EU member states and institutions and played a role in the development of European thinking in areas such as standards and incident response.

What organizations can do to mitigate threats to data management

Adding granular encryption with BYOK (bring your own key) is an effective weapon in breach prevention. If even an administrator or engineer who manages data in an organization cannot read that data, a hacker will be stopped cold – he may succeed in stealing the data, but not in using it for his own gain. Threats to cybersecurity are considerable and are becoming worse with the proliferation of big data and its use in AI. Good practices raise awareness of cybersecurity risks and help organizations create robust, reliable and fast disaster recovery plans (DRPs) in advance. Organizations can also gain by using AI to monitor systems, detect vulnerabilities and bridge those vulnerabilities, turning AI into a strategic asset. Many organizations' cloud data environments lack the technology for the effective automation of data privacy compliance, and they find it challenging to meet the requirements of the most stringent regulation for data protection, the GDPR.

How to recruit top tech talent: Do's and don'ts

Dice Editor Nate Swanner said they were surprised that remote work rated so highly on the list, adding that "tech pros can see through the pizazz: A flashy job title, dedicated parking spot and a fresh MacBook Pro won't cumulatively overcome great health benefits or remote work." Research firm Gartner has found that things may not be so simple, though: Benefits like healthcare may be highly desired, but they're also basic expectations for job seekers. "Instead, candidates want to know which benefits set the organization apart," Gartner said, noting that educational benefits, well-being initiatives, and innovative perks are far more likely to attract top talent. Giving credence to Gartner's argument is its research on the types of benefits mentioned in a job posting vs. how long that posting remains up. Mentions of medical care, employee well-being, and work-life balance had zero impact on how long a posting went unfilled, while dental/vision coverage, financial benefits, family programs, and disability/life insurance all significantly reduced the amount of time it took to fill a job.

Move over HR: Why tech is taking charge of company culture

The key lesson, says Lewis, is that the broader organisation sees the plus-points that a new way of working brings and then demands similar benefits. "In the same way that it happened in the IT industry in terms of Scrum and Agile, I think people have started to realise that smaller, cross-functional teams can add value in other areas of the business," he says. Lewis, therefore, posits a change in perception, one that holds that non-IT executives are recognising that digital chiefs have broad expertise that can help change the business for the better. Board members who call on their CIOs for advice on people and processes find new ways to overcome the cultural challenges associated with transformation. That view resonates with Brad Dowden, interim CIO and director at Intercor Transformations. He says the experience digital leaders have of running transformation programmes definitely leaves them well placed to advise the rest of the organisation — including HR chiefs — about the best ways to pursue successful culture change initiatives.

Breaking the chains: How FUD is holding the cyber sector hostage

The biggest cyber danger for companies is not the CFO getting hacked by Chinese wizard-class hackers using an offensive AI-driven quantum virus via blockchain – it’s someone from the accounts team, clicking on that phishing email link because he did his mandatory corporate security training seven months ago and has forgotten to double-check the URL. It could also be someone from the development team facing a tight deadline and nabbing some code from GitHub, without having the time to really read through it and find that remote shell buried in line 2,361. Suppliers can hype and sensationalise the capabilities of their products, and the scale of the threat, but ultimately all they are doing is damaging customers’ trust – the trust that is vital for a company to know that its cyber security strategy is based on a proportional and relevant response to the threats it faces as an organisation.

Using Contract Testing for Applications With Microservices

What makes contract testing awesome is that it does this in a way that fits well into a microservice workflow, said Groeneweg. The most important thing is that it decouples the test between the service that consumes the API (the consumer) and the API itself (the provider). This allows you to bring either to production without needing the other. It’s especially useful when they are maintained by different teams, because it enables them to be autonomous in testing and releasing.
Groeneweg stated that contract testing is a way of reducing the risk of integration bugs. It is also a lot faster than other forms of integration testing. That’s important, as it allows you to decrease lead time and eliminate the waste caused by slow feedback from tests, he said. Because the consumer defines the contract, contract testing also leads to better interfaces and APIs that are actually used.
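The decoupling Groeneweg describes can be shown with a framework-free toy (real projects would reach for a tool such as Pact; the endpoint and field names here are invented): the consumer pins only the fields it actually uses, and the provider verifies its own response against that contract in its own test suite, so neither side blocks the other's release.

```python
# Toy consumer-driven contract: the consumer publishes the request it makes
# and the shape of the response it depends on. Names are illustrative.
consumer_contract = {
    "request": {"method": "GET", "path": "/orders/42"},
    "response_shape": {"id": int, "status": str},
}

def provider_satisfies(contract: dict, actual_response: dict) -> bool:
    """Check the provider's response contains every field the consumer relies on."""
    shape = contract["response_shape"]
    return all(
        field in actual_response and isinstance(actual_response[field], ftype)
        for field, ftype in shape.items()
    )

# The provider may return extra fields; the contract only pins what the
# consumer uses, so both sides can evolve and release independently.
print(provider_satisfies(consumer_contract,
                         {"id": 42, "status": "shipped", "carrier": "DHL"}))
```

Because the check runs against the provider's code alone, no shared staging environment is needed, which is where the speed advantage over end-to-end integration tests comes from.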

Quote for the day:

"The key to successful leadership today is influence, not authority." -- Ken Blanchard

Daily Tech Digest - February 20, 2019

Excessive Permissions are Your #1 Cloud Threat

IT administrators and hackers now have identical access to publicly hosted workloads, using standard connection methods, protocols, and public APIs. As a result, the whole world becomes your insider threat. Workload security, therefore, is defined by the people who can access those workloads and the permissions they have. ... One of the primary reasons for migrating to the cloud is speeding up time-to-market and business processes. As a result, cloud environments make it very easy to spin up new resources and grant wide-ranging permissions, and very difficult to keep track of who has them and what permissions they actually use. All too frequently, there is a gap between granted permissions and used permissions. In other words, many users have too many permissions, which they never use. Such permissions are frequently exploited by hackers, who take advantage of unnecessary permissions for malicious purposes. As a result, cloud workloads are vulnerable to data breaches, service violation (i.e., complete takeover of cloud resources), and resource exploitation.
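The granted-versus-used gap reduces to simple set arithmetic, which is roughly what cloud auditing tools compute from access logs. The permission names below are illustrative, not from the article:

```python
# Sketch of a least-privilege audit: the "permission gap" is simply
# granted minus used, derived from access logs over some window.
def permission_gap(granted: set[str], used: set[str]) -> set[str]:
    """Permissions a principal holds but has never exercised."""
    return granted - used

granted = {"s3:GetObject", "s3:PutObject", "ec2:TerminateInstances"}
used = {"s3:GetObject"}
print(sorted(permission_gap(granted, used)))
# every printed permission is attack surface that could be revoked
```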

The most advanced hacking groups are getting more ambitious

Groups like Chafer, DragonFly, Gallmaker and others are all conducting highly targeted hacking campaigns as they look to gather intelligence on businesses they think hold valuable information. Once, attackers might have needed the latest zero-days to gain entry into corporate networks; now it's spear-phishing emails laced with malicious content that are most likely to provide attackers with the initial entry they need. And because these espionage groups are so proficient at what they do, they have well-tried-and-tested means of conducting activity once they're inside a network. "It's like they have steps which they go through which they know are effective to get into networks, then for lateral movement across networks to get what they want," Orla Cox, director of Symantec's security response unit, told ZDNet.

Why blockchain may be blockchain’s best cybersecurity option

Developers should take the initiative to build their own decentralized security applications for anti-phishing, anti-malware, intrusion detection and distributed VPNs to deploy on the global blockchain. The bottom line is that it’s not enough to trust blockchain’s security just because it is more transparent than other approaches to data security and privacy. Developers, miners and even enterprises need to look at the entire digital ecosystem when considering security, as every single point provides savvy hackers a weak link to exploit. As blockchain investment continues to skyrocket and the crypto markets continue to diversify – even with the recent slowdown – we will see more unique and sophisticated examples of cyber criminals penetrating blockchain’s security veneer. That is the paradoxical ratio of technology: for every positive innovation tech creates, there is almost an equal number of sinister “innovations” to match. This is most certainly true of blockchain. The key is to keep discussing threats to blockchain to inspire those securing it.

How Estonia became an e-government powerhouse

Estonia is among the elite group of countries in the highest echelons of the UN's E-Government Development Index (EDGI), with its citizens and public servants able to access a wide range of services online using secure digital IDs, including making payments, accessing full health records, and internet voting. Estonia has been building out its e-government since the mid-90s, not long after declaring independence from the Soviet Union. The program continues to make headlines with bold new digital initiatives, such as its e-residency program, which gives anyone living anywhere in the world the ability to receive a government-issued digital ID and full access to Estonia's public e-services. Today, 99% of the public services are available online 24/7, 30% of Estonians use i-Voting, and the country estimates the reduced bureaucracy has saved 800 years of working time.

The 11 biggest issues IT faces today

“Security professionals must be extra vigilant with detection and training against these threats,” says John Samuel, CIO at CGS. “This year, companies will need to introduce AI-based protection systems to be able to contain any such attacks introduced by this next-gen tech.” Grinnell says AI wasn’t a factor in the most notable attacks of the last year, but he expects that to change. “I believe 2019 will bring the first of many AI-driven attacks on U.S. companies, critical infrastructure and government agencies,” he says. “Let’s hope I’m wrong.” Forward-thinking organizations are now implementing privacy by design in their products, but making sure those efforts meet GDPR standards is an ongoing concern. Google, for example, just saw a record fine by French regulators over how the company collects data. “U.S. businesses will need to consider a GDPR-type policy to protect citizens even before any regulations are enacted,” Samuel says. “Ultimately, there must be international guidelines to ensure customer privacy and protection on a global scale to allow for easier compliance.”

Setting expectations and preparing for a new breed of cyberattacks

Lateral movement is a method used by cyberattackers to move through a network, as they search for the essential data that is the eventual target of the breach. Continuing to hide in plain sight, cybercriminals are leveraging non-malware / fileless attack methods to do this, which is the biggest indicator that attackers aren’t just focused on one component of an organization, but are seeking additional targets as they infiltrate the network. In order for today’s organizations to prepare for these threats to security, they first need to solve the problem of visibility. True endpoint visibility should allow you to “turn back the clock” and see exactly what happened on the endpoint at a specific date. To understand how significant this capability is, we found that an organization with 10,000 endpoints is estimated to see more than 660 attempted cyberattacks per day.

Can work allocation algorithms play fair?

Allocating work by algorithm is not an inherently bad idea, according to James Farrar, chair of United Private Hire Drivers, a branch of the Independent Workers Union of Great Britain, who has co-led legal action against Uber for drivers’ rights. Many drivers working for conventional minicab companies pay a fee for the privilege (Uber takes a percentage, typically 20-25%), only to see controllers giving the best jobs to their friends, ordering them to collect their takeaway food and even demanding bribes. “People didn’t just walk away from those operators, they ran to Uber,” Farrar says. But the company’s algorithms create their own problems. Uber tells prospective drivers that “there’s no office and no boss”, adding that “with Uber, you’re in charge”. Farrar says this is not reflected in the ways the company’s algorithms allocate jobs and influence driver behaviour.

Unactioned data subject access requests could lead to legal action

A Talend report published in September 2018 found that only 30% of organisations are able to fulfil DSARs within the GDPR’s 30-day deadline. This shows how difficult it is to maintain an effective DSAR process. Requests have increased substantially since the GDPR took effect, while the deadline to respond has decreased and the amount of information that must be provided has increased. It’s no surprise, therefore, that many organisations are looking for help. The GDPR DSAR Support Service, provided by our sister company GRCI Law, is a perfect example of how you can simplify the process. GRCI Law’s experienced data privacy lawyers and DPOs (data protection officers) will manage the process on your behalf to ensure that requests are completed in accordance with the GDPR’s requirements.

How managed network services are evolving to simplify the global WAN

The first step toward evolving the managed network services market was network function virtualization (NFV). “When the service providers were facing the need to streamline their operation, move faster, respond faster, they took an approach of virtualizing appliances,” says Yovel. “Think about all the different network functions that used to be in the old network—next-generation firewalls, various orchestration solutions, VPN solutions, and so on. They virtualized all these boxes, but that didn't change the core dynamic of the network itself. Each function coming from different vendors still had its own management interface, plus its own scaling and sizing environment. The fact the appliance was virtualized didn’t change that. They still had the same problem with the centralized architecture as in the past.” Consider the example of virtualizing a firewall. Mobile users still need to connect over the internet over long distances to some firewall in some location to get the security they need. The fact that the firewall is virtualized doesn’t change that dynamic.

Microservices With CQRS and Event Sourcing

Microservices are independent, modular services that have their own layered architecture. When microservices share the same database, the data model among the services can follow the relationships among the tables associated with the microservices. ... A shared database is not recommended in a microservices-based approach because, if there is a change in one data model, the other services are also impacted. As a microservices best practice, each microservice should have its own database. ... The limitation of this approach is that transaction management cannot be handled properly. If customer data is deleted, the corresponding order also has to be deleted for that customer. Though this can be achieved with workarounds, like calling a delete service in the Order service, atomicity is not achievable in a straightforward way. This needs to be handled with customization.
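One common way to keep the per-service-database approach and still make cross-service deletes consistent is the event-sourcing side of CQRS: services append immutable events to a log, and each read model is rebuilt by replaying them. This is a toy sketch with invented event names, not code from the article:

```python
# Toy event-sourcing sketch: state is never updated in place; instead,
# an append-only event log is replayed to build the current read model.
events = [
    {"type": "CustomerCreated", "id": 1},
    {"type": "OrderPlaced", "customer_id": 1, "order_id": 10},
    {"type": "CustomerDeleted", "id": 1},
]

def project_orders(event_log: list[dict]) -> dict[int, list[int]]:
    """Replay events into a read model mapping active customers to orders."""
    customers: dict[int, list[int]] = {}
    for e in event_log:
        if e["type"] == "CustomerCreated":
            customers[e["id"]] = []
        elif e["type"] == "OrderPlaced":
            customers[e["customer_id"]].append(e["order_id"])
        elif e["type"] == "CustomerDeleted":
            # the projection drops the customer and their orders together,
            # sidestepping the cross-service atomic-delete problem
            customers.pop(e["id"], None)
    return customers

print(project_orders(events))  # {} -- customer 1 and order 10 are both gone
```

The trade-off is eventual consistency: the read model is only as fresh as its last replay, which is why event sourcing is usually paired with careful handling of in-flight queries.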

Quote for the day:

"The world_s greatest achievers have been those who have always stayed focussed on their goals and have been consistent in their efforts." -- Roopleen

Daily Tech Digest - February 19, 2019

The role of open source in networking

The biggest gap in open source is probably management and support. Vendors keep making additions to the code. For example, zero-touch provisioning is not part of the open source stack, but many SD-WAN vendors have added that capability to their products. Low-code/no-code development can also become a problem: with APIs now widely available, users mix and match stacks rather than writing raw code, using GUIs whose modules communicate with a REST API. Essentially, you are taking the open source modules and aggregating them together. The problem with pure network function virtualization (NFV) is that a bunch of different software stacks are running on a common virtual hardware platform. The configuration, support, and logging from each stack still require quite a bit of integration and support. Some SD-WAN vendors are taking a “single pane of glass” approach where all the network and security functions are administered from a common management view.

The Internet Has A New Problem: Repeating Random Numbers!

To understand the problem, it’s useful to take a 30-second tutorial on Digital Certificates. Those of you who managed to stay awake during math class will remember that Asymmetric Cryptography uses two prime numbers to create the Public and Private Key of a Digital Certificate. The Public Key maps an input (that you want to keep secret) into a large number field, while the Private Key reverses the transformation. The theory goes that since there’s an infinite set of prime numbers, there’s an infinite set of Public/Private Key combinations. To make sure the prime numbers are different, a Random Number Generator (RNG) is used. Sounds pretty secure. Infinite is a big number. What could go wrong? Well, the real world is a bit different from math class. It seems the random number generators (RNGs) on computer devices don’t really draw primes from an infinite set, but rather from a bounded one, which in turn yields a bounded set of Public/Private Key combinations.
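
A well-known consequence of a bounded prime pool is that two unrelated public keys can end up sharing a prime factor, and a simple GCD computation on the two public moduli then exposes it. A toy Python sketch, using tiny illustrative primes rather than real key sizes:

```python
# Toy illustration (hypothetical tiny primes, not real key sizes): if a weak
# RNG hands two devices the same prime, gcd() on their PUBLIC moduli
# recovers it instantly.
from math import gcd

p_shared = 10007         # prime repeated by a bounded RNG (toy value)
q1, q2 = 10009, 10037    # distinct second primes

n1 = p_shared * q1       # public modulus of device 1
n2 = p_shared * q2       # public modulus of device 2

common = gcd(n1, n2)     # an attacker needs only the two public keys
assert common == p_shared
# Knowing the shared prime, the attacker can factor both moduli and
# derive both private keys.
```

This is the batch-GCD attack that has been run at scale against real certificate corpora; no factoring of a full modulus is ever required.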

Understanding the Darknet and Its Impact on Cybersecurity

Uses of the darknet are nearly as wide and as diverse as the internet: everything from email and social media to hosting and sharing files, news websites and e-commerce. Accessing it requires specific software, configurations or authorization, often using nonstandard communication protocols and ports. Currently, two of the most popular ways to access the darknet are via two overlay networks. The first is the aforementioned Tor; the second is called I2P. Tor, which stands for “onion router” or “onion routing,” is designed primarily to keep users anonymous. Just like the layers of an onion, data is stored within multiple layers of encryption. Each layer reveals the next relay until the final layer sends the data to its destination. Information is sent bidirectionally, so data is being sent back and forth via the same tunnel. On any given day, over one million users are active on the Tor network. I2P, which stands for the Invisible Internet Project, is designed for user-to-user file sharing. It takes data and encapsulates it within multiple layers. 
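
The layer-peeling idea can be illustrated with a toy sketch. This uses a simple XOR cipher purely for illustration; real onion routing uses proper symmetric encryption with per-relay circuit keys:

```python
# Toy onion layering (illustration only -- not Tor's actual cryptography).
# Each relay peels exactly one layer of encryption.
def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call wraps and unwraps a layer.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

relay_keys = [b"relay-one", b"relay-two", b"relay-three"]
message = b"hello destination"

# The sender wraps the message once per relay, innermost layer last.
packet = message
for key in reversed(relay_keys):
    packet = xor_layer(packet, key)

# Each relay in turn removes its own layer and forwards the rest; no single
# relay sees both the sender and the plaintext destination data.
for key in relay_keys:
    packet = xor_layer(packet, key)

assert packet == message  # the final layer delivers the original data
```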

Tap compare does not attempt to act as a direct substitute for any other testing technique -- you will still need to write other kinds of tests, such as unit tests, component tests or contract tests. However, it can help you detect regressions so that you can feel more confident about the quality of the new version of the developed service. One important thing about tap compare is that it provides a new layer of quality assurance around your service. With unit tests, integration tests, and contract tests, the tests verify functionality based on your understanding of the system, so the inputs and outputs are provided by you during test development. Tap compare is totally different. Here, the validation of the service occurs with production requests, either by capturing a group of them from the production environment and replaying them against the new service, or by using the traffic-mirroring technique, where you shift production traffic so that it is sent to both the old version and the new version, and you compare the results.
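
A minimal sketch of the technique, with stand-in callables in place of real HTTP services (the names and the toy "services" are hypothetical):

```python
# Tap-compare sketch: replay the same captured requests against the old and
# new versions of a service and report any divergence in the responses.
def tap_compare(captured_requests, old_service, new_service):
    """old_service / new_service are callables: request -> response."""
    mismatches = []
    for req in captured_requests:
        old_resp, new_resp = old_service(req), new_service(req)
        if old_resp != new_resp:
            mismatches.append((req, old_resp, new_resp))
    return mismatches

# Stand-in services: the new version mishandles the zero case.
old = lambda x: x * 2
new = lambda x: x * 2 if x != 0 else -1

diffs = tap_compare([0, 1, 2, 3], old, new)
assert diffs == [(0, 0, -1)]  # regression surfaced by production-shaped input
```

The value is that the inputs come from real traffic rather than from the tester's imagination, so edge cases nobody thought to encode in a unit test still get exercised.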

Why are IoT platforms so darn confusing?

An Internet of Things (IoT) platform is the support software that connects edge hardware, access points, and data networks to other parts of the value chain (which are generally the end-user applications). IoT platforms typically handle ongoing management tasks and data visualization, which allow users to automate their environment. You can think of these platforms as the middleman between the data collected at the edge and the user-facing SaaS or mobile application. That last line is key because to me, an IoT platform is little more than a fancy name for the middleware that connects everything together. i-Scoop focuses on that aspect: “An IoT platform is a form of middleware that sits between the layers of IoT devices and IoT gateways (and thus data) on one hand and applications, which it enables to build, on the other.” Perhaps, though, IoT platform vendor KAA offers the most honest description. While acknowledging the middleware aspect, the vendor also allows that “an IoT platform can be wearing different hats depending on how you look at it.”

Kaspersky Lab Launches New Threat Intelligence Service

Sergey Martsynkyan, Head of B2B Product Marketing at Kaspersky Lab, provided some insights into the new threat intelligence service. “Being aware of the most relevant zero-days, emerging threats and advanced attack vectors is key to an effective cybersecurity strategy.” “However, manually collecting, analyzing and sharing threat data doesn’t provide the level of responsiveness required by an enterprise. There’s a need for a centralized point for accessible data sources and task automation.” According to Kaspersky Lab, one-third of enterprise CISOs feel overwhelmed by threat intelligence sources. Moreover, they also tend to struggle with connecting their threat intelligence to their SIEM solution. Kaspersky CyberTrace reflects the blurring lines between the different disciplines of cybersecurity. In addition, the new service highlights the growing importance of threat detection and remediation in the modern cybersecurity paradigm; a prevention-based model often leaves cyber-attacks free to dwell on enterprise networks and wreak havoc in the digital background.

RPA: the key players, and what’s unique about them

Of all the specialist key players in RPA, Blue Prism is the only company listed on the stock market. We spoke to Pat Geary, the company’s Chief Evangelist. Pat has an interesting claim to fame in this space, for it was he who first came up with the phrase RPA. Blue Prism puts a quite different emphasis on RPA; indeed, it goes further and argues that a lot of the ‘claimed’ players in the RPA space are not actually RPA companies at all — rather, they sell what he calls RDA: robotic desktop automation. When we spoke to Mr Geary, he put emphasis on the word guardian: “Operational security guardians, resilience and backup guardians, audit guardians and governance guardians.” He says that what he calls RDA bypasses these guardians — “sneaking stuff in without passing the guardians.” He describes Blue Prism as providing a fortress RPA: “it’s absolutely bullet-proof,” he says. He likens the Blue Prism solution to a padded room — an area that is safe and allows for experimentation; the “business can do whatever they like in there, but they’re not going to break anything.”

Three Pillars with Zero Answers: Rethinking Observability with Ben Sigelman

The big challenge associated with metrics is in dealing with high cardinality. Graphing metrics often provides visibility that allows humans to understand that something is going wrong, and the associated metric can then be explored further by diving deeper into the data via an associated metadata tag, e.g. user ID, transaction ID, geolocation, etc. However, many of these tags have high cardinality, which presents challenges for querying data efficiently. The primary challenge with logging is the volume of data collected. Within a system based on a microservices architecture, the amount of logging is typically the product of the number of services and the transaction rate. The total cost of maintaining the ability to query this data can be calculated by further multiplying the total number of transactions by the cost of networking and storage, and again by the number of weeks of retention required. For a large-scale system, this cost can be prohibitive.
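
The cost multiplication described above is easy to make concrete. All the figures below are made-up assumptions purely for illustration:

```python
# Back-of-envelope version of the logging cost multiplication.
# Every figure here is an assumed, illustrative value.
services = 50                       # microservices in the system
tx_per_second = 1_000               # system-wide transaction rate

log_lines_per_second = services * tx_per_second   # volume ~ services x rate

weeks_retention = 4
seconds_retained = weeks_retention * 7 * 24 * 3600
cost_per_million_lines = 0.10       # assumed network + storage cost, USD

total_lines = log_lines_per_second * seconds_retained
total_cost = total_lines / 1_000_000 * cost_per_million_lines
print(f"~${total_cost:,.0f} for {weeks_retention} weeks of retention")
# -> ~$12,096 for 4 weeks of retention
```

Even at these modest assumed rates the bill is real money, and both multipliers (service count and transaction rate) tend to grow over a system's life.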

Digital transformation in healthcare remains complex and challenging

Whilst progress has been made in digital healthcare, it hasn’t necessarily been transformational and in many cases is a simple conversion of analogue to electronic. Certainly the areas of eReferral, ePrescribing and eHealth Records haven’t undergone revolutionary change; they’re simply the transference of what were analogue forms and processes into electronic versions of the same. In healthcare transformation, so many processes remain ripe for digital disruption. We’re heading into the post-digital era, where healthcare organisations will need to adopt new and emerging technology. These new technologies will drive change in an environment where the sector already has a multitude of existing digital tools. The new technology already appearing in healthcare includes artificial intelligence, distributed ledger technology, extended reality and quantum computing. Most industries that have undergone digital transformation have done so by adopting a data-driven approach. In healthcare we’re entering an era where data will be generated at scale.

What You Need to Know about Modern Identity Security

At its core, a modern identity and access management platform must handle provisioning, deprovisioning, and modifying user access from a central network location. Provisioning refers to giving initial permissions to an employee when they first enter your workforce. Deprovisioning, in turn, refers to removing all of the permissions from an employee’s account when they leave your employ. Through role management, an IAM solution should also help you evaluate and adjust the permissions your employees have as they change roles and positions during their time with your enterprise. You should consider all three of these capabilities absolutely necessary for your enterprise. Limiting the permissions individual users possess often proves the best way to prevent a security threat from taking hold; it limits the damage a stolen password can do and reduces the likelihood of an insider threat. A modern identity security solution should also allow your IT security team to mandate a certain level of password complexity.
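
The three capabilities can be sketched minimally. The role names and permissions below are hypothetical:

```python
# Minimal IAM sketch: provisioning, role change, and deprovisioning all
# administered from one central store. Roles and permissions are made up.
ROLE_PERMISSIONS = {
    "analyst": {"read_reports"},
    "manager": {"read_reports", "approve_requests"},
}

class IdentityStore:
    def __init__(self):
        self.access = {}  # user -> set of permissions

    def provision(self, user, role):
        """Initial permissions when an employee joins."""
        self.access[user] = set(ROLE_PERMISSIONS[role])

    def change_role(self, user, new_role):
        # Replace, don't accumulate: stale permissions from the old role
        # are the classic source of privilege creep.
        self.access[user] = set(ROLE_PERMISSIONS[new_role])

    def deprovision(self, user):
        """Remove ALL permissions when an employee leaves."""
        self.access.pop(user, None)

store = IdentityStore()
store.provision("alice", "manager")
assert "approve_requests" in store.access["alice"]
store.change_role("alice", "analyst")   # old permissions dropped, not kept
store.provision("bob", "analyst")
store.deprovision("bob")                # departure: account fully stripped
```

The `change_role` behavior is the point: replacing rather than adding permissions is what keeps each user at least-privilege as they move through the organization.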

Quote for the day:

"Trust is the lubrication that makes it possible for organizations to work." -- Warren G. Bennis

Daily Tech Digest - February 18, 2019

When It’s Okay to Fail

To bring this concept to life, let’s use a real-world case. A good example is Netflix. In 2011, the company began experimenting with the Simian Army, a “safe to fail” suite of tools that runs within Netflix’s Amazon Web Services environment and has become a key reference point for DevOps practice. Building the Simian Army was critical to Netflix’s ability to operate in a cloud-based environment fraught with potential interruptions while continuing to deliver reliable products and services. Some of the components of the Simian Army include Chaos Monkey, which tests for random failures; Latency Monkey, which introduces artificial delays and tests for the system’s return to normal runtime; and Conformity Monkey, which finds non-conforming instances and shuts them down even if they appear to perform well initially. Finally, Chaos Gorilla, at the top of the Simian Army hierarchy, simulates an outage of an entire Amazon availability zone to see how the Netflix system will handle it.
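
A toy version of the Chaos Monkey idea (this is a sketch, not Netflix's actual tool): terminate a random instance, then assert the service can still serve:

```python
# Chaos-drill sketch (hypothetical cluster model): kill a random instance
# and check that the system degrades gracefully instead of failing.
import random

class Cluster:
    def __init__(self, instances):
        self.healthy = set(instances)

    def kill_random(self, rng):
        """The 'monkey': terminate one healthy instance at random."""
        victim = rng.choice(sorted(self.healthy))
        self.healthy.discard(victim)
        return victim

    def can_serve(self):
        # Assumed resilience rule for this sketch: at least one healthy
        # instance must remain to answer traffic.
        return len(self.healthy) >= 1

rng = random.Random(42)  # seeded so the drill is reproducible
cluster = Cluster(["i-1", "i-2", "i-3"])
killed = cluster.kill_random(rng)
assert cluster.can_serve()  # the failure drill must not cause an outage
```

Running this kind of drill continuously in production is what turns "we think we're redundant" into "we know we're redundant".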

Is Blockchain Technology Overhyped?

What makes the design of blockchain so clever, De Filippi and Wright note, is its solution to an obvious concern with the description above: Why trust the network? After all, the network is open to anyone and can be joined pseudonymously. What is stopping someone from creating hundreds of Bitcoin accounts and voting to impose a bogus “consensus” on the state of the database? The technology solves this problem by making the task of adding a block to the chain into a competition: Any Bitcoin user who is interested can try to solve, by brute trial and error (and with the help of a high-powered computer), a mathematical puzzle generated by the Bitcoin software. If you find the solution, you broadcast it to the rest of the network; if a majority of the network agrees you’ve solved the puzzle (it’s simple to confirm), you receive a payment (in the form of newly minted or “mined” Bitcoin) and the block is added to the chain.
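
The competition described above is straightforward to sketch. This toy version uses a single SHA-256 with a leading-zeros target; real Bitcoin uses double SHA-256 and a vastly harder target:

```python
# Toy proof-of-work: finding the nonce is brute trial and error, while
# checking a claimed solution takes a single hash.
import hashlib

def solve(block_data: bytes, difficulty: int) -> int:
    """Search for a nonce whose hash has `difficulty` leading zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty: int) -> bool:
    """Anyone on the network can confirm a solution with one hash."""
    digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve(b"block-42", difficulty=3)  # slow: many attempts on average
assert verify(b"block-42", nonce, 3)      # fast: a single hash to confirm
```

The asymmetry between `solve` and `verify` is the whole trick: work is expensive to produce but trivial for the rest of the network to check, which is why a bogus "consensus" from fake accounts carries no weight.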

When decision makers view the future as abstract, decisions involving saving or learning that could be beneficial in the longer term, for example, are put off. This short-term perspective becomes a real problem when people need to learn new skills and adapt their behaviors. HR leaders should take an elevated view of time and think in the longer term. This helps people consider both now and later, making the future less abstract and pulling potential opportunities into the present. Becoming a broker of time means viewing time as an asset and thinking about how HR can help its people reap the benefits of effective investment in it. This view will also help HR leaders support their CEO in building a narrative on the future of work. There is no doubt that corporate leaders are often under immense pressure, so the focus on the short term outweighs the long term. But employees want and need a long-term perspective about their work, and it is the role of leaders to build this narrative.

WCF for the Real World, Not Hello World

WCF provides inherent support for such separation. For some reason, the templates that come with VS hide and limit this power of WCF, though they are good enough to deliver your first WCF solution ASAP and get applause from your boss. However, if you want to deliver elegant solutions to complicated problems in an efficient way, you had better follow the SOLID OOD principles and separate concerns as much as possible. Otherwise, you will have to work harder, rather than smarter, when the project needs to evolve and resolve more complicated problems. In a typical enterprise application, classes with different life cycles had better stay in different packages, in different Visual Studio solutions, or even in different revision control repositories. Planning for such an arrangement is essential for improving maintainability and flexibility in order to lower costs and improve productivity and quality. In addition, the build time could be reduced.

Everything you need to know about the Chief Information Officer explained

When something technical goes wrong at the weekend, it is likely to be the CIO who gets the call, even if someone else ultimately has the duty of fixing the problem. While the rest of the business talks about the importance of game-changing digital transformation, most CIOs recognise they will be judged first and foremost on their ability to do the basics right. A grand e-business strategy will fail if the CIO neglects to ensure that cybersecurity or business continuity is taken seriously, for example. As a C-level executive, the CIO is responsible for setting the IT strategy and ensuring that this works with the broader business strategy. In many digital businesses the IT strategy will be the main element driving the business strategy. This means the CIO needs to be able to understand the broader business requirements and decide which to prioritise through the use of technology. Another big role for the CIO is building and maintaining an effective and motivated team.

Overcoming RESTlessness

So how can REST's value evolve in this new paradigm? There are an increasing number of organizations adopting an "API First" approach to software development; that is, emphasizing the importance of designing the machine interfaces in their applications and services to the same extent as UIs, and using those APIs to decouple the development efforts of teams responsible for different domains. OpenAPI often plays an important role in this methodology, as the implementation-agnostic interface specification. In accordance with the post-Web paradigm, this benefits the various people involved in building or modifying the software system. There is already a project underway -- AsyncAPI from Fran Mendez -- that aims to bring this same value to event-based interactions. Along the same lines, Mike Amundsen and Leonard Richardson introduced the ALPS specification to capture the semantics of network-based application interactions. Efforts like these help to address the design-time challenges of building distributed systems.
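
A minimal OpenAPI sketch of what such an implementation-agnostic contract looks like (the endpoint, fields, and titles are hypothetical examples, not from any real service):

```yaml
# Minimal OpenAPI 3.0 contract: designed first, then both the provider team
# and consumer teams build against it independently.
openapi: "3.0.3"
info:
  title: Orders API
  version: "1.0.0"
paths:
  /orders/{orderId}:
    get:
      summary: Fetch a single order
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested order
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: string
                  status:
                    type: string
```

Because the document says nothing about implementation language or framework, either side can be rewritten freely as long as the contract still holds, which is precisely the decoupling "API First" is after.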

CISOs under increasing pressure, study shows

“It’s no surprise that CISOs are facing burnout. Many lack support from within their organisations, and senior business leaders need to face the facts: the threats are real, and CISOs need to be given the resources and support to tackle them. If not, the board must face the consequences. “The risk is not only personal to a CISO, but to a business’s hard-won reputation. The growing economic cost is also a worrying trend. A recent report put the cost of global cyber crime at $600bn in 2017. With that cost likely to rise in the future, we must all work harder, and cooperatively, to mitigate potential losses by having the right strategy, tools and resource in place to prevent breaches in the first place.” Dimitrios Tsivrikos, a business psychologist and lecturer at University College London, said it is of “paramount importance” to address organisational stress. “Extra emphasis ought to be paid to CISOs,” he said.

Is Cybersecurity the Same as Data Privacy?

Cybersecurity is a set of strategies, techniques, and controls to reduce risk and ensure that your data assets are protected. If data privacy is about control, then cybersecurity has the means to add some, but not all, of the aspects of that control. Cybersecurity is at the heart of the discipline of data protection: protection of assets in all the forms they take. Like privacy, cybersecurity is a process. It requires an understanding of the threat landscape to create policies, processes and procedures, and then put the tenets of those efforts into practical application. Every aspect of our working and personal lives is touched by cybercrime, and the cybersecurity space is a buoyant place, reflecting this. The cybersecurity industry is expected to grow over 10% annually to be worth $248 billion by 2023. Cybersecurity covers a wide gamut of ways to protect our organizations and ourselves from cyber-attacks, whether they come from inside our company or from external threats. Cybersecurity is not just about the protection of data.

Continuous Integration Process: How to Improve Software Quality and Reduce Risk

In the Continuous Integration process, we talked about running a second build of the mainline code. This build happens on an integration machine. You might wonder why. As testers, we encounter situations where bugs are seen only in one particular environment and not in another. This is exactly why a mainline build is run on an integration machine. Sometimes the integration machine will spring surprises that didn’t exist in a developer’s local system. Human errors, such as not syncing your code with the mainline, will also show up here. Therefore, only once it builds successfully here can the commit be declared done. As testers, we’re all too familiar with environment-related defects. Production systems have their own configurations in terms of database levels, operating system, OS patches, libraries, networking, storage, etc. It is good practice to have the test environment as close as possible to the production system, if not an exact replica. Any major discrepancies and risks can be easily identified this way before they actually hit production systems.
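
The test-versus-production parity check lends itself to automation. A small sketch, with hypothetical configuration keys and values:

```python
# Environment-parity sketch: diff the test environment's configuration
# against production so discrepancies surface before a release, not after.
def config_drift(prod: dict, test: dict) -> dict:
    """Return {key: (prod_value, test_value)} for every mismatched key."""
    keys = prod.keys() | test.keys()
    return {k: (prod.get(k), test.get(k))
            for k in keys if prod.get(k) != test.get(k)}

# Made-up configuration snapshots for illustration.
prod = {"db_version": "12.4", "os_patch": "KB5001", "tls": "1.3"}
test = {"db_version": "12.4", "os_patch": "KB4999", "tls": "1.3"}

drift = config_drift(prod, test)
assert drift == {"os_patch": ("KB5001", "KB4999")}  # flagged before release
```

Running such a check as a gate in the pipeline turns "the test environment should match prod" from a guideline into something the build actually enforces.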

Why blockchain has caught the fancy of IIT-B

Indian Institute of Technology, Bombay (IIT-B), signed an agreement with US-based Ripple Labs Inc., to create a centre of excellence to support academic research, technical development and innovation in blockchain, cryptocurrency and digital payments. IIT-B thus became one of the 17 universities across the world to benefit from the $50-million Ripple fund for its global University Blockchain Research Initiative (UBRI). “The idea is to create the next generation of students and entrepreneurs," says Navin Gupta, managing director, South Asia and MENA (Middle East and North Africa), Ripple. The partnership is expected to provide IIT-B’s faculty and students with opportunities for research and technology development in blockchain and cryptocurrency, which could add value to the global blockchain ecosystem, as well as industries such as fintech, professor Devang V. Khakhar, director, IIT-B, said after signing the agreement. While Ripple will provide a grant, strategic guidance and technical resources, IIT-B already has a centre of excellence on blockchain that aims to understand prevalent blockchain platforms ...

Quote for the day:

"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley

Daily Tech Digest - February 17, 2019

Inside the “hive-mind”: how AI-powered drone “swarms” can benefit society
The next step has primarily been enabled through the rise of fog computing. This model involves drones flown autonomously and transmitting only relevant data for analysis in real-time. Rather than waiting for the drone to land to download the data it captured, businesses are able to pre-program drones with specific flight plans, allowing them to fly autonomously. As they fly, their IoT-enabled sensors feed data to the fog node on board for processing and analysis, and only the exceptions or alerts are transmitted to the cloud. Thus, drones can now be used in applications where time is of the essence and real-time insights are critical. For example, in the event of a flood, first responders could send a drone into a flood zone to look for stranded survivors. Each drone would be pre-programmed to fly over their designated patch of the flooded area, reporting back sightings of stranded people or animals in real-time. The data can then be “stitched” together and analysed at a central location so emergency response teams can create the optimal evacuation plan.
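
The "transmit only the exceptions" pattern can be sketched simply. The frame format below is made up purely for illustration:

```python
# Fog-node sketch: process sensor frames locally on the drone and yield
# only the alerts worth sending upstream to the cloud.
def fog_filter(sensor_frames, is_exception):
    """Yield only the frames that qualify as exceptions/alerts."""
    for frame in sensor_frames:
        if is_exception(frame):
            yield frame

# Hypothetical frames from a flood-zone flight.
frames = [
    {"zone": "A1", "people_detected": 0},
    {"zone": "A2", "people_detected": 2},   # stranded survivors -> alert
    {"zone": "A3", "people_detected": 0},
]

alerts = list(fog_filter(frames, lambda f: f["people_detected"] > 0))
assert alerts == [{"zone": "A2", "people_detected": 2}]
# Only 1 of 3 frames leaves the drone, keeping the real-time link lightweight.
```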

How to Avoid Failing at Mobile Test Automation

Management and some developers (especially backend ones) think that by having E2E UI tests running against the real environment, all real-life situations can be covered. In addition, they think that those tests will make up for the absence of API tests, backend tests, and client integration tests, which is wrong. There are many things that cannot be tested on mobile because of platform limitations. Simple examples would be deep linking from/to external apps and push notifications. People also tend to forget that there are too many layers between the backend and the application’s UI where it all can go wrong, and there are no frameworks I know of that can give detailed info on where exactly the problem was: third party, backend, network, the network implementation in the app, UI, you name it. As a result, projects end up with unmaintainable tests and disappointment in test automation.

JP Morgan creates first US bank-backed crypto-currency

Not everyone is convinced that JP Morgan needed to create its own digital currency. A blockchain is designed to be decentralised, so no one party has control over transactions being sent over the network. This is the opposite of the JPM Coin concept. "It doesn't even need a blockchain at all because JP Morgan runs it. They could do it on a website and database they run," David Gerard, author of Attack of the 50 Foot Blockchain: Bitcoin, Blockchain, Ethereum & Smart Contracts, told the BBC. "It isn't like Bitcoin that aren't under anybody's control - it's a centrally controlled thing that sounds vaguely like crypto-currency." JP Morgan says that it is trialling crypto-currency and blockchain in order to speed up payment transfers, as well as reducing clients' counterparty and settlement risk, and decreasing capital requirements. However, Mr Gerard is sceptical and does not believe that the bank needs the technology to speed up transactions.

Multi-Team Backlog Refinement

Multi-team PBR is when all members of all teams refine PBIs together, without yet deciding which team will implement which item. Below are some benefits that multi-team refinement can give you: Adaptability at the product level. Why? Because all teams understand all PBIs on the Product Backlog, instead of each team understanding only their PBIs, a subset of the Product Backlog. If all teams understand all PBIs, then the PO can put whichever PBIs she deems most valuable at the top, without being constrained to the PBIs a particular team understands; Improved self-coordination, as the teams maintain a broad understanding of the whole product and the upcoming PBIs, and are therefore more likely to know of "dependencies" between PBIs; and A transparent measure of progress at the product level, since all teams participate in estimating all PBIs, so there is one common velocity at the product level, instead of a distinct velocity per team that needs to be combined into a total.

AI bias: It is the responsibility of humans to ensure fairness

Fortunately, many organisations already recognise that AI bias can occur, and are taking active remedial measures to avoid it. Microsoft’s FATE, for example, aims to address the need for transparency, accountability and fairness in AI and machine learning systems, while IBM’s AI Fairness 360 (AIF360) is an open source toolkit of metrics that can be used to check for unwanted bias in datasets and machine learning models, accompanied by algorithms to mitigate such bias. Elsewhere, Google has a bias-detecting ‘What If’ tool in the web dashboard for its TensorFlow machine learning framework. Both Microsoft and IBM mention fairness, and the distinction between fairness and discrimination is an important one. Discrimination is the action taken based on bias, whereas fairness is a lack of bias. Amazon’s AI-based recruitment system made biased predictions, which may have been unfair; but, per Amazon’s claim that the tool was not used to inform its decisions, there was no discrimination.
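
One simple fairness check, a selection-rate (demographic parity) comparison, can be computed in plain Python. This is a generic illustration with made-up data, not the AIF360 API:

```python
# Demographic-parity sketch: compare selection rates across two groups;
# a large gap is a signal of unwanted bias worth investigating.
def selection_rate(outcomes):
    """Fraction of positive decisions (1s) in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 0, 1, 0, 1]   # hypothetical hiring decisions, group A
group_b = [0, 1, 0, 0, 0, 0]   # hypothetical hiring decisions, group B

disparity = selection_rate(group_a) - selection_rate(group_b)
assert abs(disparity) > 0.2    # gap large enough to flag before deployment
```

Toolkits like AIF360 package many such metrics, along with mitigation algorithms; the underlying arithmetic for this particular metric is no more than the comparison above.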

Can Artificial Intelligence take away your job? Probably not

When it comes to bias, an ML model will always operate the way you've trained it, said Olivier Klein, Head of Emerging Technologies, Asia-Pacific at Amazon Web Services (AWS), which is retail giant Amazon's Cloud arm. "If you train a model with a bias, you would end up with a biased model. You continuously need to train and re-train your ML model and the most important thing is that you need some form of feedback from the end-consumers," Klein told IANS. "ML is absolutely not about replacing humans but enhancing the experiences," he added. ... Klein said that humans are really good at learning quickly with very little information. "ML models are the opposite. They require a lot of data inputs to be able to be trained. "I would argue that you show someone a bicycle a few times and you show them how to ride a bicycle and the human being is able to ride that bicycle pretty easily. To just train a robot to ride a bicycle takes millions of hours of training," explained Klein.

The Wall Street Journal is helping SMBs shore up cybersecurity

The Wall Street Journal has created WSJ Pro Cybersecurity, a program specifically designed for small businesses. WSJ Pro Cybersecurity provides information about cybersecurity through a business lens. For $25 plus tax a month, membership includes a daily newsletter, panel discussions, interviews, webinars, and white papers. There is currently a two-week free trial for those who are interested in learning what kind of information is being disseminated. ... Besides the newsletter, WSJ will begin offering the WSJ Pro Cybersecurity Small Business Academy, a two-day conference on cybersecurity and how it relates to smaller companies. "For large companies, cyberattacks can be costly and can jeopardize customer relationships, but for smaller businesses cyberattacks can be a fatal blow," according to a WSJ press release. "Smaller businesses can also lack the resources of large companies and are often faced with a confusing array of vendors offering services."

Five emerging cybersecurity threats you should take very seriously in 2019

Firms must work to bridge the gap between communicating the technical aspects of cybersecurity and the business outcomes, such as customer satisfaction, financial health, and reputation, Olyaei said. Keeping track of new threats and not just established ones like ransomware is key for a strong security posture, said Josh Zelonis, senior analyst at Forrester.  "Whenever we develop our strategies for how we're going to protect our organizations, it's really easy to look at things that you're familiar with, or that you have a good understanding of," Zelonis said. "But if you're not looking ahead, you're building for the problems that already exist, and not setting yourself up for long-term success. And that is really the number one reason why you need to be looking ahead -- to understand how attack techniques are evolving." Here are five emerging cybersecurity threats that business, technology, and security leaders need to take seriously this year.

Why hybrid cloud architecture means totally rethinking security

But no matter why or how, from an enterprise or business perspective, it’s significantly detrimental to business, whether it’s harm to the brand from having lost customer data, or actual financial losses or downtime. “The net of it is, we need to think about security at an enterprise level,” says Mike Wronski, principal marketing manager at Nutanix. “So who owns security? Is it the cloud provider, the enterprise, or the security team?” “The generally accepted answer, or the politically correct answer, would be that it’s everyone’s responsibility,” Ashworth says. “I believe that’s true to an extent, but with a major caveat.” Since companies aren’t democratic, but totalitarian in nature, Ashworth believes a top-down approach to security has to be the ideal scenario. Security has to be recognized as intrinsic to the fabric of IT business continuity, rather than an impediment to IT goals. If a strong culture for security exists within a company, you can be assured that security is thought of at all levels, from the end user being able to recognize spam, to good sec ops within the QA process.

Centralizing Availability, Disaster Recovery and Backup for Efficient Business Continuity

From a disaster recovery perspective, the appliance offers up to 20 CPU cores and 768 GB of RAM. Unlike many competing offerings, the appliance can host applications during a server or storage system outage. The client doesn’t need to purchase additional standby hardware with the new Arcserve solution. According to Arcserve, dozens of copies of physical and virtual systems may be spun up directly on the appliance. RAM may be expanded in eight of the 11 new models to be able to run more hosts in the event of a failure. Meanwhile, cloud disaster recovery-as-a-service may be added on for integration with offsite copies, as may high availability in the form of CDP-based failover and failback. In terms of scalability, the system can support up to 504 TB of effective capacity (assuming a 20:1 deduplication ratio). It can manage up to 6 PB of backups.

Quote for the day:

"I don't know what leadership is. You can't touch it. You can't feel it. It's not tangible. But I do know this: you recognize it when you see it." -- Bob Ehrlich