Daily Tech Digest - June 22, 2019

Why AI is here to stay

So here’s why AI is not a fad: in real life, there’s no way I’m giving up my ability to fall back on teaching with examples if I’m not clever enough to come up with the instructions. Absolutely not! I’m pretty sure I use examples more than instructions to communicate with other humans when I stumble around the real world. AI means I can communicate with computers that second way — via examples — not only by instructions. Are you seriously asking me to suddenly gag my own mouth? Remember, in the old days we had to rely primarily on instructions only because we couldn’t do it the other way, in part because processing all those examples would strain the meager CPUs of last century’s poor desktops. But now that humanity has unlocked its ability to express itself to machines via examples, why would we suddenly give that option up entirely? A second way of talking to computers is too important to drop like yesterday’s shoulderpads. What we should drop is our expectation that there’s a one-size-fits-all way of communicating with computers about every problem. Say what you mean and say it the way that works best.
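
To make the contrast concrete, here is a minimal sketch of the two ways of talking to a computer. The fraud-detection task, the features and the data are all invented for illustration, and the example-driven version assumes scikit-learn is available:

```python
# Way 1: instructions -- we write the decision logic ourselves.
def is_fraud_by_rules(amount, hour):
    # Hand-coded rule: large transactions at odd hours look suspicious.
    return amount > 1000 and (hour < 6 or hour > 22)

# Way 2: examples -- we hand over labeled cases and let the machine
# infer the logic (toy data; requires scikit-learn).
from sklearn.tree import DecisionTreeClassifier

examples = [[1200, 3], [15, 14], [900, 23], [40, 10]]  # (amount, hour)
labels = [1, 0, 1, 0]                                  # 1 = fraud

model = DecisionTreeClassifier().fit(examples, labels)
print(model.predict([[1100, 2]]))  # behavior learned, not hand-coded
```

Both produce a usable classifier; the point is simply that the second option exists at all, and that for some problems it is the only practical one.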


Pledges to Not Pay Ransomware Hit Reality

"I don't think you can make a blanket statement of 'pay the ransom' or 'don't pay the ransom,'" says Adam Kujawa, director of the research labs at security firms Malwarebytes. "If you have failed to segment your data or your network, or failed to check your backups or other measures to get your company back on track quickly, then you will have to deal with the fallout." One problem for companies: Ransomware operators have shifted away from blanketing consumers and businesses with opportunistic ransomware attacks and now almost exclusively target business and municipalities. Along with that shift, the cost of ransoms has quickly grown because such organizations can afford to pay. Now, many organizations are faced with seven-digit ransom demands, Zelonis says. "That's a heck of a payday," he adds. The increase in ransom demands is driven by attackers' targeting and research on victims, he says.


End of the line for Internet Explorer 10 might mean updating embedded systems


Microsoft hasn't given specific dates yet; IE11 is coming to the Update Catalog sometime in spring 2019 (which likely means before the end of June), with the other upgrade options coming later in 2019. That means you won't have many months to test and validate IE11 on any systems where you're still using IE10, so you will want to plan your test labs and pilot rings now. Microsoft deliberately didn't put the new Edge browsing engine into IE11 because of enterprise concerns that it might cause compatibility problems. Instead, it still uses the Trident engine and includes document modes that emulate the IE5, IE7, IE8, IE9 and IE10 rendering engines. There are also specific Enterprise Modes to emulate IE8, and IE8 in Compatibility View, but if your sites worked in IE10 you won't need those. What you will need to change are sites that have the x-ua-compatible meta tag or HTTP header set to 'IE=edge'; in IE10 that means Internet Explorer 10 mode, but in IE11 it means Internet Explorer 11 mode, because it's just asking for the latest IE version. Set it to 'IE=10' if the site has problems.
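
For reference, the meta tag form of that pin is `<meta http-equiv="x-ua-compatible" content="IE=10">`. The sketch below sets the equivalent HTTP header server-side; the choice of a Flask app is my assumption for illustration, not something the article prescribes:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def pin_ie_document_mode(response):
    # 'IE=edge' requests the latest installed IE engine (IE11 after the
    # upgrade); pin to IE10 only for sites that misbehave under IE11.
    response.headers["X-UA-Compatible"] = "IE=10"
    return response
```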


Expect graph database use cases for the enterprise to take off

As useful as graph databases are for certain types of queries and analysis, graph tools will present several challenges to CIOs, Moore warned. Data engineers and business experts need to learn new skill sets and create new workflows for defining and refining the graph data models used for these applications. Classical SQL databases were optimized to conserve memory and CPU. They are still the best technology for many kinds of applications such as ERP that involve doing a lot of columnar addition. But joining database tables together to do new kinds of queries can add considerable overhead to SQL databases. As a result, new types of queries can be limited by memory capacity. In contrast, graph databases, as noted, precompute these relationships in a way that speeds analytics and shrinks the size of the data store. In one project, Moore said he managed to shrink a 5 TB SQL database into a 2 TB graph database. A big challenge that must be factored into graph database use cases is their slower performance when writing to the database.
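
A toy sketch of why precomputed relationships pay off, with invented tables: the relational style answers a relationship question by scanning a join structure, while a graph node carries its edges directly.

```python
# Relational style: friendships live in a separate join table, so
# answering "who are user 1's friends?" means scanning that table.
friendships = [(1, 2), (1, 3), (2, 3)]  # (user_id, friend_id) rows
friends_of_1 = [b for (a, b) in friendships if a == 1]  # cost grows with table size

# Graph style: the relationship is stored on the node itself,
# so the same question is a direct lookup.
graph = {1: [2, 3], 2: [3], 3: []}  # adjacency list
friends_of_1 = graph[1]             # no scan, no join
```

Multi-hop questions (friends of friends of friends) widen this gap; the trade-off, as the excerpt notes, is slower writes, since the adjacency must be kept up to date.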


7 Types Of Artificial Intelligence

Since AI research purports to make machines emulate human-like functioning, the degree to which an AI system can replicate human capabilities is used as the criterion for determining the types of AI. Thus, depending on how a machine compares to humans in terms of versatility and performance, AI can be classified as one of multiple types. Under such a system, an AI that can perform more human-like functions with equivalent levels of proficiency is considered a more evolved type of AI, while an AI that has limited functionality and performance is considered a simpler and less evolved type. Based on this criterion, there are two ways in which AI is generally classified. The first classifies AI and AI-enabled machines by their likeness to the human mind, and their ability to “think” and perhaps even “feel” like humans. According to this system of classification, there are four types of AI or AI-based systems: reactive machines, limited memory machines, theory of mind, and self-aware AI.


The logic of digital change

Disruption may be a yawn, but the fact is that the internet is changing things slowly but surely, and specifically it began when cloud and APIs allowed start-ups to bootstrap and launch on a shoestring. Now, there are 12,000 start-ups globally getting investments that have been doubling each year – $111.8 billion last year – and so there is something happening. Don’t be complacent. Nothing may have happened in the last quarter century but something will happen in the next, and only the banks that adapt will survive, as Charles Darwin would say. ... there is specifically a fourth revolution of humanity occurring where the people who historically could not be reached by banks are now being reached by technology. The financially illiterate, the folks who aren’t worth it, the financially vulnerable, the unbankable, are all getting to be included because that’s what digital does. In a world where we distribute money physically, you cannot afford to deal with someone in a remote African village; in a world where we distribute money digitally, even the guy sitting in a village near the base camp of Mount Everest can trade and transact.


A.I. Ethics Boards Should Be Based on Human Rights


Human rights are imperfect ideals, subject to conflicting interpretations, and embedded in agendas with “outsized expectations.” Though supposedly global, human rights aren’t honored everywhere. Nevertheless, the United Nations Universal Declaration of Human Rights is the best statement ever crafted for establishing all-around social and legal equality and fundamental individual freedoms. The Institute of Electrical and Electronics Engineers rightly notes that human rights are a viable benchmark, even among diverse ethical traditions. “Whether our ethical practices are Western (Aristotelian, Kantian), Eastern (Shinto, Confucian), African (Ubuntu), or from a different tradition, by creating autonomous and intelligent systems that explicitly honor inalienable human rights and the beneficial values of their users, we can prioritize the increase of human well-being as our metric for progress in the algorithmic age.” Technology companies should embrace this standard by explicitly committing to a broadly inclusive and protective interpretation of human rights as the basis for corporate strategy regarding A.I. systems. They should only invite people to their A.I. ethics boards who endorse human rights for everyone.



Accelerating Digital Innovation Inside & Out


Not only are digitally maturing companies more likely to use cross-functional teams, but those teams also generally function differently in more mature organizations than in less mature ones. They’re given greater autonomy, and their members are often evaluated as a unit. Participants on these teams are also more likely to say that their cross-functional work is supported by senior management. For more advanced companies, the organizing principle behind cross-functional teams is shifting from projects toward products. Digitally maturing companies are more agile and innovative, but as a result they require greater governance. Organizations need policies that create sturdy guardrails around the increased autonomy their networking strength allows. Digitally maturing companies are more likely to have ethics policies in place to govern digital business. Policies alone, however, are not sufficient. Only 35% of respondents across maturity levels say their company is talking enough about the social and ethical implications of digital business.


Three hacking trends you need to know about to help protect yourself


"The blurred lines between the techniques used by nation-state actors and those used by criminal actors have really gotten a lot fuzzier," says Jen Ayers, vice president of OverWatch cyber intrusion detection and security response at CrowdStrike. "Many criminal organisations are still very loud, but the fact is rather than going the traditional spam email route that they have been before, they are actively intruding onto enterprise networks, they are targeting unsecured web servers and going in, stealing credentials and doing reconnaissance," she adds. This is another tactic which malicious threat actors are beginning to deploy in order to both avoid detection and make attacks more effective – conducting campaigns that don't focus on Windows PCs and other common devices used in the enterprise. With these devices sitting in front of users every single day, and a top priority for antivirus software, there's a higher chance that an attack on these devices will either be prevented by security measures or spotted by users.


Data Strategy: Essential elements to enhance it

Elena Alfaro, head of data and open innovation at the client solutions division of the Spanish bank BBVA, described her organization's work of "spreading the culture of data" and ensuring that the senior leadership of an organization is on board with the data initiatives. "What I've learned is if the person you're sitting with doesn't understand, it is very difficult to get to something big," said Alfaro. For the past two years, Forrester has ranked the BBVA's mobile app the best in the banking business. Forrester's Aurelie L'Hostis credited the bank's app for "striking a superb balance between useful functionality and excellent user experience," a product that Alfaro says grew out of a data strategy with the end user in mind. "Digital banks listen to their customers, they're clever with data, and they work hard on making it easy for customers to manage their financial lives," L'Hostis writes. "It's not a small feat, but that's what your customers are demanding." But regardless of the industry, Wixom argues that companies with a successful data strategy implement a framework that ensures a high level of data integrity and makes sure that it is broadly and easily accessible.



Quote for the day:


"Each day you are leading by example. Whether you realize it or not or whether it's positive or negative, you are influencing those around you." -- Rob Liano


Daily Tech Digest - June 21, 2019

Defining a Test Strategy for Continuous Delivery

Defining the test cases requires a different mindset than implementing the code. It's better that the test cases are not defined by the same person who implemented the feature. Implementing good automated tests requires serious development skills. This is why, if there are people on the team that are just learning to code (for example testers that are new to test automation), it's a good idea to make sure that the team is giving them the right amount of support to skill up. This should be done through pairing, code review, and knowledge-sharing sessions. Remember that the entire team owns the codebase. Don't fall into the split ownership trap, in which production code is owned by the devs and test code is owned by the testers. This hinders knowledge sharing, introduces test case duplication and can lead to a drop in test code quality. Developers and testers are not the only ones that care about quality. Ideally, the Product Owner should define most of the acceptance criteria. She is the one that has the best understanding of the problem domain and its essential complexity. So she should be a major contributor when writing acceptance criteria.
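
A hypothetical acceptance criterion expressed as an automated test (pytest style); the feature and all names are invented, but it shows the shape of the test code the whole team would own:

```python
# Feature code (invented for illustration).
def apply_discount(total, code):
    return round(total * 0.9, 2) if code == "SAVE10" else total

# Acceptance criteria, as the Product Owner might state them.
def test_valid_code_reduces_total_by_ten_percent():
    assert apply_discount(100.0, "SAVE10") == 90.0

def test_unknown_code_leaves_total_unchanged():
    assert apply_discount(100.0, "TYPO") == 100.0
```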



Blockchain expert Alex Tapscott sees coming crypto war as 'cataclysmic'

Digital technology has had a profound impact on virtually every aspect of our lives – except for banking. The institutions we rely on as trusted intermediaries to move, store and manage value, exchange financial assets, enable funding and investment and insure against risk, are more-or-less unchanged since the advent of the internet. This is changing, thanks to blockchain. Libra is only the latest in a wave of revolutionary new innovations that is beginning to disrupt the old model. Bitcoin remains the most consequential and important innovation in at least a generation. It laid the groundwork for a new internet of value that promises to do to value industries, like financial services, what the internet did to information industries, like publishing. At first, the impact on banks will be muted. In fact, Facebook will need to rely on some existing banking infrastructure to successfully launch Libra. Over time, however, Libra could cut banks out of many aspects of the industry altogether. I share the deep belief that Bitcoin will do the same.


The downfall of the virtual assistant (so far)

We've talked plenty about the reasons why everyone and their mother wants you to get friendly with their flavor of robot aid — and why that, in turn, has led to what I call the post-OS era, in which a device's operating system is less important than the virtual assistant threaded throughout it. It's no coincidence that Google is slowly expanding Assistant into a platform of its own, and what we're seeing now is almost certainly just the tip of the iceberg. Something we haven't discussed much, though, is a painful reality that often gets overlooked in all the glowing coverage about this-or-that new virtual assistant gizmo or feature. And for anyone who ever tries to rely on this type of talking technology — be it for on-the-go answers from your phone, on-the-fly device control in your home, or hands-free help in your office — it's a reality that's all too apparent. The truth is, for all of their progress and the many ways in which they can be handy, voice assistants still fail far too frequently to be dependable. And the more Google and other companies push their virtual assistants and expand the areas in which they operate, the more pressing the challenge to correct this problem becomes.


Introduction to Reinforcement Learning


Why are we talking about all this? What does this mean to us, except that we need to have pets if we want to become a famous psychologist? What does this all have to do with artificial intelligence? Well, these topics explore a type of learning in which a subject interacts with the environment. This is the way we as humans learn as well. When we were babies, we experimented. We performed some actions and got a response from the environment. If the response was positive (a reward), we repeated those actions; if it was negative (a punishment), we stopped doing them. In this article, we will explore reinforcement learning, a type of learning inspired by this goal-directed learning from interaction. ... Another type of learning is unsupervised learning. In this type of learning, the agent is provided only with input data, and it needs to make some sort of sense out of it. The agent is basically trying to find patterns in otherwise unstructured data. This approach is usually used for classification or clustering problems.
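
A minimal sketch of that reward-driven loop, using an epsilon-greedy two-armed bandit; the environment and its payoff probabilities are invented for illustration:

```python
import random

rewards = {"left": 0.3, "right": 0.7}      # hidden payoff probabilities
estimates = {"left": 0.0, "right": 0.0}    # the agent's learned values
counts = {"left": 0, "right": 0}

for step in range(1000):
    # Explore occasionally; otherwise exploit the best current estimate.
    if random.random() < 0.1:
        action = random.choice(["left", "right"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1 if random.random() < rewards[action] else 0  # environment responds
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # should approach {'left': ~0.3, 'right': ~0.7}
```

Rewarded actions get chosen more often; punished (unrewarded) ones fade, which is exactly the behavioral loop the excerpt describes.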


Cyberwarfare escalation just took a new and dangerous turn


In the murky world of espionage and cyberwarfare, it's never entirely clear what's going on. Does the US really have the capabilities to install malware in Russian energy systems? If so, why would the intelligence agencies be comfortable (as they seem to be) with the story being reported? Is this an attempt to warn Russia and make its government worry about malware that might not even exist? But beyond the details of this particular story, there are a number of major concerns here -- particularly around unexpected consequences and the escalation of cyberwarfare risks. It's very hard for a company (or a government) to tell the difference between hackers probing a network as part of general reconnaissance and the early stages of an attack itself. So even probing critical infrastructure networks could raise tensions. There's significant risk in planting malware inside another country's infrastructure with the aim of using it in future. The code can be discovered, which is at the very least embarrassing and, worse, could be seen as a provocation. It could even be reverse-engineered and used against the country that planted it.


Nutanix XI IoT: An Overview For Developers

By distributing the computing part of the problem to the edge, we can execute detection-decision-action logic with limited latency. For example, immediate detection might mean a defective product never leaves the production line, much less makes it to the customer. The consequences of receiving a defective item can range from inconvenient to catastrophic. If it is an article of clothing, the article might require a return. While this may have a range of negative consequences to the business, it does not compare to the consequences of having a defective part installed in an aircraft. Edge computing of data created by IoT edge devices can clearly benefit business, but as we mentioned earlier, as the number and diversity of devices grows, so does the workload for developers attempting to write applications for these devices. Configuring devices, networking devices, managing devices and data streams … these are all tasks that distract developers from the primary task at hand: creating the applications that use IoT data to serve the needs of your business.


Blockchain and AI combined solve problems inherent in each


Best known as the technology that powered bitcoin, blockchain offers an immutable record of every transaction, ensuring that all nodes have the same version of the truth and no records are tampered with. That makes it a relatively fail-safe and hack-proof method for storing and transferring monetary value. But to ensure this safety, the nodes have to go through huge calculations to ensure the validity of the transactions. Blockchain's mechanism for ensuring safety is also its weakness, as it limits scalability. The same is true for blockchain's immutability; every record needs to store the entire history of all transactions. The problems associated with AI are different. AI needs data to operate, but getting good data can be problematic. For instance, hackers can alter the data a machine is trained on with a data poisoning attack. Collecting data from clients is also problematic, especially in light of data privacy laws such as Europe's GDPR. Finally, most of the data needed for effective AI is owned by large organizations, such as Google and Facebook.
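
A toy sketch of the tamper-evidence that hashing buys; real chains add consensus, signatures and much more, so this is illustration only:

```python
import hashlib, json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

chain = []
prev = "0" * 64
for tx in ["alice->bob:5", "bob->carol:2"]:
    block = {"tx": tx, "prev": prev}  # each block commits to its predecessor
    prev = block_hash(block)
    chain.append(block)

# Tampering with an early record breaks every later link.
chain[0]["tx"] = "alice->bob:500"
print(block_hash(chain[0]) == chain[1]["prev"])  # False -> tampering detected
```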



Secure by default: minimum cyber security requirements for surveillance cameras

In an effort to ensure the UK’s resilience to attacks that exploit vulnerabilities in network-connected cameras, the Surveillance Camera Commissioner (SCC) said the minimum requirements were an important step forward for manufacturers, installers and users alike. The work has been led by Mike Gillespie, cyber security advisor to the SCC and managing director of information security and physical security consultancy Advent IM, along with Buzz Coates, business development manager at CCTV distributor Norbain. The standard was developed in consultation with surveillance camera manufacturers Axis, Bosch, Hanwha, Hikvision and Milestone Systems. Speaking ahead of the official launch, Gillespie said that if a device came out of the box in a secure configuration, there was a good chance it would be installed in a secure configuration. “Encouraging manufacturers to ensure they ship their devices in this secure state is the key objective of these minimum requirements for manufacturers,” he said. Manufacturers benefit, said Gillespie, by being able to demonstrate that they take cyber seriously and that their equipment is designed and built to be resilient.


3 top soft skills needed by today’s data scientists


Data scientists who can understand the business context, plus the technical side of the equation, will be invaluable. This kind of “bilingual” talent can turn data streams into a predictive model, and then translate that model into a working reality, such as for financial forecasting. Core skills in storytelling, problem solving, agile development, and design thinking are critical to operating within different business contexts as well. The key is to develop T-shaped skillsets, as opposed to being I-shaped. While I-shaped people have a deep, narrow understanding of one area (like data engineering or data science), T-shaped people have both in-depth knowledge in one area and a breadth of understanding of several others. It is easier for T-shaped people to meld their data expertise to a broad range of use cases and industries. ... The communication side will be especially important as data expertise gets pulled into interdisciplinary use cases. Data scientists will have to be able to talk to people with different backgrounds. This goes back to the need to be more T-shaped to effectively translate highly technical ideas to different business contexts.


Using OpenAPI to Build Smart APIs for Dumb Machines

OpenAPI isn’t the only spec for describing APIs, but it is the one that seems to be gaining prominence. It started life as Swagger and was rebranded OpenAPI with its donation to the OpenAPI Initiative. RAML and API Blueprint have their own adherents. Other folks like AWS, Google, and Palantir use their own API specs because they predate those other standards, had different requirements or found even opinionated specs like OpenAPI insufficiently opinionated. I’ll focus on OpenAPI here because its surging popularity has spawned tons of tooling. The act of describing an API in OpenAPI is the first step in the pedagogical process. Yes, documentation for humans to read is one obvious output, but OpenAPI also lets us educate machines about the use of our APIs to simplify things further for human consumers and to operate autonomously. As we put more and more information into OpenAPI, we can start to shift the burden from humans to the machines and tools they use. With so many APIs and so much for software developers to know, we’ve become aggressively lazy by necessity. APIs are a product; reducing friction for developers is a big deal.
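
A minimal OpenAPI 3 description, sketched here as a Python dict for brevity (real specs usually live in YAML or JSON files); the pet-store path and schema are invented:

```python
import json

spec = {
    "openapi": "3.0.0",
    "info": {"title": "Pets API", "version": "1.0.0"},
    "paths": {
        "/pets/{petId}": {
            "get": {
                "parameters": [{
                    "name": "petId", "in": "path",
                    "required": True, "schema": {"type": "integer"},
                }],
                "responses": {"200": {"description": "A single pet"}},
            }
        }
    },
}

# Feed this to doc generators, client code generators, linters, mock servers...
print(json.dumps(spec, indent=2))
```

Everything a tool needs to render docs, generate a typed client, or validate requests is machine-readable in that one structure, which is the burden-shifting the excerpt describes.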



Quote for the day:


"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.


Daily Tech Digest - June 20, 2019

Researchers say 6G will stream human brain-caliber AI to wireless devices


The most relatable one would enable wireless devices to remotely transfer quantities of computational data comparable to a human brain in real time. As the researchers explain it, “terahertz frequencies will likely be the first wireless spectrum that can provide the real time computations needed for wireless remoting of human cognition.” Put another way, a wireless drone with limited on-board computing could be remotely guided by a server-sized AI as capable as a top human pilot, or a building could be assembled by machinery directed by computers far from the construction site. Some of that might sound familiar, as similar remote control concepts are already in the works for 5G — but with human operators. The key with 6G is that all this computational heavy lifting would be done by human-class artificial intelligence, pushing vast amounts of observational and response data back and forth. By 2036, the researchers note, Moore’s law suggests that a computer with human brain-class computational power will be purchasable by end users for $1,000, the cost of a premium smartphone today; 6G would enable earlier access to this class of computer from anywhere.



Cybersecurity from the Inside Out

Fundamentally, cybersecurity isn't about threats and vulnerabilities. It's about business risk. The interesting thing about business risk is that it sits at the core of the organization. It is the risk that results from company operations — whether that risk be legal, regulatory, competitive, or operational. This is why the outside-in approach to cybersecurity has been less than successful: Risk lives at the core of the organization, but cybersecurity strategy and spending has been dictated by factors outside of the organization with little, if any, business risk context. This is why we see organizations devoting too many resources to defend against threats that really aren't major business risks, and too few to those that are. To break the cycle of outside-in futility, security organizations need to change their approach, so they align with other enterprise risk management functions. And that approach is to turn outside-in on its head, and take an inside-out approach to cybersecurity. Inside-out security is not based on the external threat landscape; it's based on an enterprise risk model that defines and prioritizes the relative business risk presented by organizations' digital operations and initiatives. 


Post-Hadoop Data and Analytics Head to the Cloud

Gartner analyst Adam Ronthal said that while there are some native Hadoop options available in public clouds like AWS, they may not be the best solution for many applications. "There's a fair bit of complexity that goes into managing a Hadoop cluster," he told InformationWeek. Non-Hadoop-based cloud solutions may look simpler and easier to organizations that are evaluating data and analytics solutions. But that doesn't mean there's not a place for Hadoop in the future. Ronthal said that Hadoop is experiencing a "market correction" rather than an existential crisis. There are use cases that Hadoop is really good at, he said. But a few years back, Hadoop was the rock star technology that was the solution to every problem. "The promises out there 3, 4, or 5 years ago were that Hadoop was going to change the world and redefine how we did data management," he said. "That statement overpromised and underdelivered. What we are really seeing now is recognition of workloads that Hadoop is really good at, like the data science exploration workloads."


Artificial intelligence could revolutionize medical care. But don’t trust it to read your x-ray just yet


The algorithms learn as scientists feed them hundreds or thousands of images—of mammograms, for example—training the technology to recognize patterns faster and more accurately than a human could. “If I’m doing an MRI of a moving heart, I can have the computer predict where the heart’s going to be in the next fraction of a second and get a better picture instead of a blurry” one, says Krishna Kandarpa, a cardiovascular and interventional radiologist at the National Institute of Biomedical Imaging and Bioengineering in Bethesda, Maryland. Or AI might analyze computed tomography head scans of suspected strokes, label those more likely to harbor a brain bleed, and put them on top of the pile for the radiologist to examine. An algorithm could help spot breast tumors in mammograms that a radiologist’s eyes risk missing. But Eric Oermann, a neurosurgeon at Mount Sinai Hospital in New York City, has explored one downside of the algorithms: The signals they recognize can have less to do with disease than with other patient characteristics, the brand of MRI machine, or even how a scanner is angled.


Cybersecurity Risk Assessment – Made Easy 

Cybersecurity risk assessment is critical because cyber risks are part and parcel of any technology-oriented business. Factors such as lax cybersecurity policies and technological solutions with vulnerabilities expose an organization to security risks, and failing to manage those risks gives cybercriminals opportunities to launch massive cyberattacks. Fortunately, a cybersecurity risk assessment allows a business to detect existing risks. It also facilitates risk analysis and evaluation to identify the vulnerabilities with the highest damage potential, so that a business can select suitable controls for addressing them. ... Cybersecurity risk assessments have many other benefits, all aimed at bolstering organizational security. Most importantly, they are how a company identifies the most suitable security controls for an optimum cybersecurity posture.


Cybersecurity Accountability Spread Thin in the C-Suite

"CEOs are no longer looking at cyber-risk as a separate topic. More and more they have it embedded into their overall change programs and are beginning to make strategic decisions with cyber-risk in mind," says Tony Buffomante, global co-leader of cybersecurity services at KPMG. "It is no longer viewed as a standalone solution."  That sounds good at the surface level, but other recently surfaced statistics offer grounding counterbalance. A global survey of C-suite executives released last week by Nominet indicates these top executives have some serious gaps in knowledge about cybersecurity, with around 71% admitting they don't know enough about the main threats their organizations face. This corroborates with a survey of CISOs conducted earlier this year by the firm that indicates security knowledge and expertise possessed by the board and C-levels is still dangerously low. Approximately 70% of security executives agree that at least one cybersecurity specialist should be on the board in order for it to take appropriate levels of due diligence in considering the issues.


Industrial IoT as Practical Digital Transformation

To navigate this journey in the face of both uncertainty and hype, their company leaders chose a measured approach of “practical” digital transformation. To begin, they adopted IoT through an iterative process of incremental value testing. Notably, they selected goals for increasing internal effectiveness rather than fixating on new customer offerings. As a result, usage data from equipment inside customer facilities now empowers a more cost-effective services team and reduces truck rolls. Furthermore, understanding how their machines are operated in the field enables product teams to proactively identify problem areas and continuously improve their equipment offerings. Both use cases are internal rather than directly customer-facing. Yet it’s their customers who ultimately benefit from higher operational productivity enabled by these ever-smarter systems. Moving forward, machine utilization numbers will better prepare sales teams for guiding customers toward systems best matching their true capacity needs, as well as inform warranty management issues. Connected systems create opportunities for exceeding customer expectations at every turn.


Why the Cloud Data Diaspora Forces Businesses to Rethink their Analytics Strategies

The single biggest thing is it allows you to scale and manage workloads at a much finer grain of detail through auto-scaling capabilities provided by orchestration environments such as Kubernetes. More importantly it allows you to manage your costs. One of the biggest advantages of a microservice-based architecture is that you can scale up and scale down to a much finer grain. For most on-premises, server-based, monolith architectures, customers have to buy infrastructure for peak levels of workload. We can scale up and scale down those workloads -- basically on the fly -- and give them a lot more control over their infrastructure budget. It allows them to meet the needs of their customers when they need it. ... A lot of Internet of Things (IoT) implementations are geared toward collecting data at the sensor, transferring it to a central location to be processed, and then analyzing it all there. What we want to do is push the analytics problem out to the edge so that the analytic data feeds can be processed at the edge. 


Brexit GDPR and the flow of data: there could be one winner and that’s the cyber criminal

Huseyin advocates technology. It may not come as a shock to learn he advocated a product from nsKnox. He refers to Cooperative Cyber Security, which allows data to be shared across organisations and networks in a way that is cryptographically shredded. “If you can take information with identifiers and put it into a form which is actually meaningless and shred it cryptographically and then distribute it to the partners of the data consortium who want to be able to access that information, you’re now pushing data around the world potentially without ever exposing the actual underlying information. “So for example, we could take your name and we can shred it and we can distribute it to let’s say two banks in Europe and two banks in the UK. Each of those banks holds a piece of information and collectively that information makes up your name, but individually those pieces of information are just bits of encrypted binary data. So totally meaningless.” So that’s two potential solutions: get close to the regulator and apply appropriate technology.
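
A toy version of that shredding idea, using textbook XOR secret sharing; this is an illustration of the concept, not nsKnox's actual product:

```python
import os
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def shred(secret, n):
    # n-1 random shares, plus one that completes the XOR back to the secret.
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, secret))
    return shares

def reassemble(shares):
    return reduce(xor, shares)

pieces = shred(b"Jane Q. Customer", 4)  # e.g. two EU banks, two UK banks
print(pieces[0])                        # any single piece: meaningless bytes
print(reassemble(pieces))               # all pieces together: b'Jane Q. Customer'
```

Each holder sees only random-looking bytes; only the full consortium can reconstruct the name, which is the property the quote describes.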


How to Use Open Source Prometheus to Monitor Applications at Scale

Using Prometheus, we looked to monitor “generic” application metrics, including the throughput (TPS) and response times of the Kafka load generator (Kafka producer), Kafka consumer, and Cassandra client (which detects anomalies). Additionally, we wanted to monitor some application-specific metrics, including the number of rows returned for each Cassandra read, and the number of anomalies detected. We also needed to monitor hardware metrics such as CPU for each of the AWS EC2 instances the application runs on, and to centralize monitoring by adding Kafka and Cassandra metrics there as well. To accomplish this, we began by creating a simple test pipeline with three methods (producer, consumer, and detector). We then used a counter metric named “prometheusTest_requests_total” to measure how many times each stage of the pipeline executes successfully, and a label called “stage” to tell the different stage counts apart (using “total” for the total pipeline count). We then used a second counter named “prometheusTest_anomalies_total” to count detected anomalies.
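
A hedged sketch of those counters using the Python client library (prometheus_client); the pipeline wiring and the anomaly check are invented placeholders:

```python
from prometheus_client import Counter, start_http_server

requests_total = Counter(
    "prometheusTest_requests_total",
    "Successful executions, labeled by pipeline stage",
    ["stage"],
)
anomalies_total = Counter(
    "prometheusTest_anomalies_total",
    "Anomalies detected by the detector stage",
)

def detector(rows):
    requests_total.labels(stage="detector").inc()
    if len(rows) > 100:  # placeholder anomaly check
        anomalies_total.inc()

start_http_server(8000)  # Prometheus then scrapes :8000/metrics
detector(list(range(150)))
```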



Quote for the day:


"Good things come to people who wait, but better things come to those who go out and get them." -- Anonymous


Daily Tech Digest - June 19, 2019

RPA use cases that take RPA to the next level


"Start with a small piece of a larger process and take on more," he said. "Then, look upstream and downstream, and ask, 'How do we take that small use case and expand the scope of what the bot is automating? Can we take on more steps in the process, or can we initiate the automation earlier in the process to grow?'" At the same time, Abel said CIOs should create an RPA center of excellence and develop the tech talent needed to take on bigger RPA use cases. He agreed that a strong RPA governance program to ensure the bots are monitored and address any change control procedures is crucial. It's also essential to maintain a strong data governance program, he said, as the bots need good data to operate accurately. Additionally, Abel said he advises CIOs to work with other enterprise executives to develop RPA use cases that align to business objectives so that RPA deployments have long-term value. Abel pointed to one client's experience as a cautionary tale. He said that company jumped right into RPA, deploying bots to automate various tasks. 


Facebook’s Libra cryptocurrency and Calibra wallet
The underlying blockchain transactional network will be able to handle thousands of transactions per second; data about the financial transactions will be kept separate from data about the social network, according to David Marcus, the former president of PayPal. He is now leading Facebook's new digital wallet division, Calibra. Aside from limited cases, Calibra will not share account information or financial data with Facebook or any third party without customer consent, the social network said in a statement. "This means Calibra customers' account information and financial data will not be used to improve ad targeting on the Facebook family of products," Facebook said. Calibra and its underlying blockchain distributed ledger will scale to meet the demands of "billions," Marcus said in an interview with Fox Business News this morning. Libra is different from other cryptocurrencies, such as bitcoin, in that it is backed by fiat currency, so its value is not simply determined by supply and demand. Bitcoin is "not a good medium of exchange today because [fiat] currency is actually very stable and bitcoin is volatile," Marcus said in the Fox Business News interview.


Western Digital launches open-source zettabyte storage initiative

With this project Western Digital is targeting cloud and hyperscale providers and anyone building a large data center who has to manage a large amount of data, according to Eddie Ramirez, senior director of product marketing for Western Digital. Western Digital is changing how data is written and stored from the traditional random 4K block writes to large blocks of sequential data, like Big Data workloads and video streams, which are rapidly growing in size and use in the digital age. “We are now looking at a one-size-fits-all architecture that leaves a lot of TCO [total cost of ownership] benefits on the table if you design for a single architecture,” Ramirez said. “We are looking at workloads that don’t rely on small block randomization of data but large block sequential write in nature.” Because drives use 4k write blocks, that leads to over-provisioning of storage, especially around SSDs. This is true of consumer and enterprise SSDs alike. My 1TB SSD drive has only 930GB available. And that loss scales. An 8TB SSD has only 6.4TB available, according to Ramirez.



'Extreme But Plausible' Cyberthreats

A new report from Accenture highlights five key areas where cyberthreats in the financial services sector will evolve. Many of these threats could commingle, making them even more disruptive, says Valerie Abend, a managing director at Accenture who's one of the authors of the report. The report, "Future Cyber Threats: Extreme But Plausible Threat Scenarios in Financial Services," focuses on credential and identity theft; data theft and manipulation; destructive and disruptive malware; cyberattackers' use of emerging technologies, such as blockchain, cryptocurrency and artificial intelligence; and disinformation campaigns. In an interview with Information Security Media Group, Abend offers an example of how attackers could commingle threats. If attackers were to wage "a multistaged attack using credential theft against multiple parties that then used disruptive or destructive malware, so that they actually change the information at key points in the business process of critical financial functions ... and then used misinformation outside of that entity using various parts of social media ... they could really do some serious damage," Abend says.


How to prepare for and navigate a technology disaster

Two key developments will have the largest impact on business continuity and disaster recovery planning. The first is serverless architecture. Using this term very loosely, the adoption of these capabilities will dramatically increase application and data portability and enable workloads to be executed virtually anywhere. We're quite a bit of a way from this being the default way you build applications, but it's coming, and it's coming fast. The second is edge computing. As modern applications and business intelligence are moved to the edge, the ability to 'fail over' to additional resources will increase, minimizing (if not eliminating) real and perceived downtime. The more identical places you can run your application, the better the level of availability and performance is going to be. This definitely isn't simple, but we're seeing (and developing) applications each and every day that are built with this architecture in mind, and it's game changing for enterprise and application architecture and planning.


Q&A on the Book Risk-First Software Development

The Risk Landscape is the idea that whenever we do something to deal with one risk, what’s actually happening is that we’re going to pick up other risks as a result. For example, hiring new developers into a team might mean you can clear more Feature Risks (by building features the customers need), but it also means you’re going to pick up Coordination and Agency Risk, because of your bigger team. So, you’re moving about on a Risk Landscape, hoping to find a nice position where the risks are better for you. This first volume of Risk-First Software Development was all about that landscape, and the types of risks you’ll find on it. I am planning a second volume, which again will all be available to read on riskfirst.org. This will focus more on the tools and techniques you can use to navigate the Risk Landscape.  For example, if I have a distributed team, I might face a lot of Coordination Risk, where work is duplicated, or people step on each other’s toes. What are the techniques I can use to address that? I could introduce a chat tool like Slack, but it might end up wasting developer time and causing more Schedule Risk. 


Microservices Chassis Pattern
This is not something new. Reusability is something we learn at the very beginning of our developer lives. This pattern cuts down on the redundancy factor and complexity across services by abstracting the common logic to a separate layer. If you have a very generic chassis, it could even be used across platforms or organizations and wouldn't need to be limited to a specific project. It depends on how you write it and what piece of logic you move to this framework. Chassis are a part of your microservices infrastructure layer. You can move all sorts of connectivity, configuration, and monitoring to a base framework. ... When you start writing a new service by identifying a domain (DDD) or by identifying the functionality, you might end up writing lots of common code. As you progress and create more and more services, it could result in code duplication, or even chaos, to manage such common concerns and redundant functionalities. Moving such logic to a common place and reusing it across different services would improve the overall lifecycle of your service. You might spend some initial effort in creating this component but it will make your life easier later on.
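
A minimal chassis sketch; the service and config names are invented, but it shows the shape: cross-cutting concerns live in one reusable base, and only domain logic lives in each service.

```python
import logging, time

class ServiceChassis:
    """Common connectivity, configuration and monitoring for all services."""

    def __init__(self, name, config):
        self.name = name
        self.config = config                     # shared config handling
        self.log = logging.getLogger(name)       # shared logging setup

    def health(self):
        # Shared health/monitoring endpoint logic.
        return {"service": self.name, "status": "ok", "ts": time.time()}

class OrderService(ServiceChassis):
    # Only domain logic lives here; the chassis supplies the rest.
    def place_order(self, order_id):
        self.log.info("placing order %s", order_id)

svc = OrderService("orders", {"db_url": "postgres://..."})
print(svc.health())
```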


Data governance is for everyone

Remember, data is used for dealing with your customers, making decisions, generating reports, and understanding revenue and expenditures. Everyone from the customer service team to your senior executive team uses data and relies on it being good enough to use. Data governance provides the foundation so that everything else can work. This will include obvious “data” activities like master data management, business intelligence, big data analytics, machine learning and artificial intelligence. But don’t get stuck thinking only in terms of data. Lots of processes in your organization can go wrong if the data is wrong, leading to customer complaints, damaged stock, and halted production lines. Don’t limit your thinking to only data activities. If your organization is using data (and to be honest, which companies aren’t?) you need data governance. Some people may not believe that data governance is sexy, but it is important for everyone. It need not be an overly complex burden that adds controls and obstacles to getting things done. Data governance should be a practical thing, designed to proactively manage the data that is important to your organization.



Cloud storage as a business collaboration tool

A well-managed cloud storage service ties directly into the apps you use to create and edit business documents, unlocking a host of collaboration scenarios for employees in your organization and giving you robust version tracking as a side benefit. Any member of your organization can, for example, create a document (or a spreadsheet or presentation) using their office PC, and then review comments and changes from co-workers using a phone or tablet. A cloud-based file storage service also allows you to share files securely, using custom links or email, and it gives you as administrator the power to prevent people in your organization from sharing your company's secrets without permission. With the assistance of sync clients for every major desktop and mobile platform, employees have access to key work files anytime, anywhere, on any device. You might already have access to full-strength cloud collaboration features without even knowing it. If you use Microsoft's Office 365 or Google's G Suite, cloud storage isn't a separate product, it's a feature.


Boost QA velocity with incremental integration testing

There are several strategies for incremental integration testing, including bottom-up, top-down and a hybrid approach blending elements of both, as well as automation. Each method has benefits and limitations, and gets incorporated into an overall test strategy in different ways. These incremental approaches help enable shift-left testing, which means automation shapes how teams can perform the practice. ... Often, the best approach is hybrid, or sandwich, integration testing, which combines both top-down and bottom-up techniques. Hybrid integration testing exploits bottom-up and top-down during the same test cycles. Testers use both drivers and stubs in this scenario. The hybrid approach is multilayered, testing at least three levels of code at the same time. Hybrid integration testing offers the advantages of both approaches, all in support of shift left. Some of the disadvantages remain, especially as the test team must work on both drivers and stubs.
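
A sketch of the two ingredients named above, with invented modules: a stub fakes a lower layer that is not built yet (top-down), while a driver exercises a module whose real callers do not exist yet (bottom-up).

```python
# Stub: stands in for the unfinished payment gateway (lower layer).
class PaymentGatewayStub:
    def charge(self, amount):
        return {"status": "approved", "amount": amount}  # canned response

# Module under test, integrated top-down against the stub.
class CheckoutService:
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount):
        return self.gateway.charge(amount)["status"] == "approved"

# Driver: bottom-up harness that calls the module directly,
# standing in for callers that don't exist yet.
def driver():
    service = CheckoutService(PaymentGatewayStub())
    assert service.checkout(42.0)
    print("checkout integrates against the gateway contract")

driver()
```

Hybrid testing runs both directions in the same cycle, which is why it needs drivers and stubs simultaneously.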



Quote for the day:


"What you do makes a difference, and you have to decide what kind of difference you want to make." -- Jane Goodall


Daily Tech Digest - June 18, 2019

7 Promises of Machine Learning in Finance


Machine Learning is the new black, or the new oil, or the new gold! Whatever you compare Machine Learning to, it’s probably true from a conceptual value perspective. But what about its relation to finance, and what is the situation today? Banks keep everything: a history of transactions, conversations with clients, internal information and more… Storage is literally bloated to terabytes, and in some cases petabytes. Big Data can solve this problem and process huge amounts of information: the greater the amount, the greater the detectable needs and behaviors of the client. Artificial Intelligence paired with Machine Learning allows the software to learn clients’ behavior and make autonomous decisions. ... It goes without saying that machine learning is remarkably good for finance, and the promise of this technology, including Big Data and Artificial Intelligence, is very high. As you can see, there are lots of options, approaches, and applications for improvement: choosing an optimal location for a bank, finding the best solutions for a customer, turning algorithmic trading into intelligent trading, managing risk and preventing fraud ...


What Makes A Software Programmer a Professional?

Professional software developers understand that they generally have more knowledge of software development than the customers that have hired them to write code. Thus they understand that writing secure code, code that can’t be easily abused by hackers, is their responsibility. A software developer creating web applications probably needs to address more security risks than a developer writing embedded drivers for an IoT device, but each needs to assess the different ways the software is vulnerable to abuse and take steps to eliminate or mitigate those risks. Although it may be impossible to guarantee that any software is immune to an attack, professional developers will take the time to learn and understand the vulnerabilities that could exist in their software, and then take the subsequent steps to reduce the risk that their software is vulnerable. Protecting your software from security risks usually includes both static analysis tools and processes to reduce the introduction of security errors, but it primarily relies upon educating those writing the code.


When serverless is a bad idea
Indeed, many refer to serverless as “no-ops,” but it’s really “reduced-ops,” or as my friend Mike Kavis likes to say, “some-ops.” Clearly the objective is to increase simplicity and make building and deploying net-new cloud-based serverless applications much more productive and agile. But serverless is not always a good idea. Indeed, it seems to be a forced fit a good deal of the time, causing more error than trial. Serverless is an especially bad idea when it comes to stateful applications. A stateless application means that every transaction is performed as if it were being done for the very first time. There is no previously stored information used for the current transaction. In contrast, a stateful application saves client data from the activities of one session for use in another. The data that is saved is often called the application’s state. Stateful applications are a bad fit for serverless.  Why? Serverless applications are made up of sets of services (such as functions) that are short running and stateless. Consider them transactions in the traditional sense, in that they are invoked and they don’t maintain anything after being executed.
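
A sketch of the trap, with invented handler names: in-memory state looks persistent but vanishes whenever the platform recycles the function instance, so serverless-friendly designs externalize it.

```python
counter = 0  # looks persistent, but lives only as long as this instance

def handler_bad(event):
    global counter
    counter += 1  # silently resets when the platform spins up a fresh instance
    return counter

# The serverless-friendly version keeps state outside the function.
# This dict is a stand-in for a real database or cache such as Redis.
external_store = {}

def handler_good(event):
    key = event["user_id"]
    external_store[key] = external_store.get(key, 0) + 1
    return external_store[key]
```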



Two Weekend Outages, Neither a Cyberattack

The power outages in Argentina occurred just prior to local elections being held in many parts of the country. Suspicious, right? But not as suspicious as what experts say is the country's aging power infrastructure. Indeed, Argentina's Clarín newspaper reports that the Argentinian government has blamed the "gigantic" outage on the country's electric power interconnection systems collapsing due to coastal storms. Officials also noted that the rolling blackouts had taken out not just Argentina, but large parts of Uruguay. Apparently, that's also what took out parts of Paraguay and Chile. ... Target suffered twin outages, neither of which trace to a hack attack. On Saturday, customers were unable to buy any goods in stores or online as a result of an outage caused by what Target blamed on "an internal technology issue" that lasted about two hours. "Our technology team worked quickly to identify and fix the issue, and we apologize for the inconvenience and frustration this caused for our guests," a Target spokesman said. 


AI storage: Machine learning, deep learning and storage needs


The storage and I/O requirements of AI are not the same throughout its lifecycle. Conventional AI systems need training, and during that phase they will be more I/O-intensive, which is where they can make use of flash and NVMe. The “inference” stage will rely more on compute resources, however. Deep learning systems, with their ability to retrain themselves as they work, need constant access to data. “When some organisations talk about storage for machine learning/deep learning, they often just mean the training of models, which requires very high bandwidth to keep the GPUs busy,” says Doug O'Flaherty, a director at IBM Storage. “However, the real productivity gains for a data science team are in managing the entire AI data pipeline from ingest to inference.” The outputs of an AI program, for their part, are often small enough that they are no issue for modern enterprise IT systems. This suggests that AI systems need tiers of storage and, in that respect, they are not dissimilar to traditional business analytics or even enterprise resource planning (ERP) and database systems.



Can Your Patching Strategy Keep Up with the Demands of Open Source?

An alarming number of companies aren't applying patches in a timely fashion (for both proprietary and open source software), opening themselves to risk. The reasons for not patching are varied: Some organizations are overwhelmed by the endless stream of available patches and are unable to prioritize what needs to be patched, some lack the trained resources to apply patches, and some need to balance risk with the financial costs of addressing that risk. Unpatched software vulnerabilities are one of the biggest cyberthreats that organizations face, and unpatched open source components in software add to security risk. The 2019 OSSRA report notes that 60% of the codebases audited in 2018 contained at least one open source vulnerability. In 2018, the NVD added over 16,500 new vulnerabilities. This represents a rate of over 45 new disclosures daily, or a pace most organizations are ill equipped to handle. Given open source components are consumed both in source form as well as from commercial applications, a comprehensive open source governance strategy should encompass both source code usage as well as the governance practices for any software or service provider.


3 rules for succeeding with AI and IoT at the edge


The primary value of combining AI, IoT and edge computing is their ability to generate fast, appropriate responses to events signaled by IoT sensors. Virtual and augmented reality applications demand this kind of response, as do enterprise applications in process control and the movement of goods. The cooperation inherent in manufacturing, warehousing, sales and delivery will likely create the sweet spot for an IoT-enabled AI edge. Such activities form a chain of movement of goods that cross many different companies and demand coordination that a single-company IoT model could not provide. ... Think event-flows, not workflows, in your application planning. Most enterprise development practices were weaned on transaction processing, and transactions are multistep, contextual, update-centric forms of work. Their pace of generation can be predicted fairly well, and when a transaction is initiated, the flow of information it triggers is usually predictable. Events are simply signals of conditions or changes in conditions.


Building a cyber-physical immune system

To build a credible model of its own behaviour, the system must not just learn its digital behaviour, but also capture the behaviour of its physical subsystems. One way to achieve this is to represent the behaviour in terms of physical laws. For example, moving parts of a system will obey the laws of motion; parts of a heating subsystem will obey the laws of thermodynamics; and electrical installations will obey current and voltage laws. In theory, it is possible to sense relevant physical quantities, apply the correct physical laws and then detect departures from expected behaviour. These deviations suggest that the system might be functioning abnormally, because of its own wear and tear, spontaneous failure, or concerted malicious activity. Anomaly detection, in principle, operates in this manner, but it has been applied rather narrowly to specific subsystems. ... to build a cyber-physical immune system, it is necessary to engage with experts who work on its non-cyber aspects.
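
A toy example of checking sensed values against a physical law (Ohm's law here); the readings and tolerance are invented for illustration:

```python
def obeys_ohms_law(voltage, current, resistance, tolerance=0.05):
    expected_v = current * resistance          # V = I * R
    residual = abs(voltage - expected_v) / expected_v
    return residual <= tolerance               # False -> possible fault or attack

readings = [(12.0, 2.0, 6.0), (15.3, 2.0, 6.0)]  # (V, I, R) sensor samples
for v, i, r in readings:
    if not obeys_ohms_law(v, i, r):
        print(f"anomaly: measured {v} V, expected {i * r} V")
```

The second reading deviates far beyond tolerance, so it gets flagged; the same residual-checking pattern generalizes to motion or thermodynamic laws for other subsystems.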


Managing and securing APIs beyond the network perimeter

Many businesses are investing in microservices, for example, to enable faster, more efficient application development. But whereas in traditional models, applications are deployed to application servers, in a microservices-based architecture, servers are deployed to the application. One consequence is that tasks previously handled by the application server—such as authentication, authorization, and session management—are shifted to each microservice. If a business has thousands of such microservices powering their applications across multiple clouds, how can its IT leaders even begin to think of a perimeter? ... Historically, many enterprises applied management and security to only a subset of APIs—e.g., those shared with internal partners and hosted behind the corporate firewall (within a walled garden, for example). But because network perimeters no longer contain all the experiences that drive business, enterprises should think of each API as a possible point of business leverage and a possible point of vulnerability. To adapt to today’s application development demands and threat environment, in other words, APIs should be managed and secured, regardless of where they are located.


Love It or Hate It, Java Continues to Evolve

What’s really important is that Java is continuing to evolve. With the new six-month release cadence of the OpenJDK, it might seem like the pace of change has slowed but, if anything, the reverse is true. We are seeing a constant stream of new features, many of them quite small, yet making developers’ lives much easier. Adding big new features to Java takes time because it’s essential to get these things right. We will see in JDK 13 a change to the switch expression, which was introduced as a preview feature in JDK 12. Rather than setting the syntax in stone (via the Java SE specification), preview features allow developers to try a feature and provide feedback before it is finalized. That’s precisely what happened in this case. The longer-term Project Amber will continue to make well-reasoned changes to the language syntax to smooth some of the rough edges that developers find trying at times. You can expect to see more parts of Amber delivered over the next few releases.
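
As an example of the feature being discussed, the arrow-form switch expression previewed in JDK 12 lets a switch yield a value directly; the day-of-week mapping below is invented for illustration, and compiling a preview feature requires the --enable-preview flag. The JDK 13 change referred to replaced the preview's break-with-value form for block-bodied arms with a dedicated yield statement.

```java
import java.time.DayOfWeek;

// Sketch of the switch expression previewed in JDK 12: the switch yields a
// value directly instead of assigning to a variable in every branch.
public class SwitchDemo {
    static int workingHours(DayOfWeek day) {
        return switch (day) {
            case SATURDAY, SUNDAY -> 0;  // multiple labels per arm
            case FRIDAY -> 4;
            default -> 8;                // no fall-through, no break needed
        };
    }

    public static void main(String[] args) {
        System.out.println(workingHours(DayOfWeek.FRIDAY));  // prints 4
    }
}
```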



Quote for the day:


"You must expect great things of yourself before you can do them." -- Michael Jordan


Daily Tech Digest - June 17, 2019

300+ Terrifying Cybercrime and Cybersecurity Statistics & Trends


With global cybercrime damages predicted to cost up to $6 trillion annually by 2021, not getting caught in the landslide is a matter of taking in the right information and acting on it quickly. We collected and organized over 100 up-to-date cybercrime statistics that highlight: the magnitude of cybercrime operations and impact; the attack tactics bad actors used most frequently in the past year; how user behavior is changing and how it isn’t; what cybersecurity professionals are doing to counteract these threats; how different countries fare in terms of fighting off blackhat hackers and other nation states; and what can be done to keep data and assets safe from scams and attacks. Dig into these surprising (and sometimes mind-boggling) internet security statistics to understand what’s going on globally and discover how several countries fare in protecting themselves. The article includes a handy infographic you can browse to see how each stat is connected to the others, and plenty of visual representations of the most important facts and figures in information security today.


How Blockchain And AI Can Help Master Data Management

Ensuring data security is vital, not only for ethical purposes but also for compliance with regulatory bodies. And no conversation about security and privacy, in this day and age, is complete without mention of blockchain. Blockchain, which is often considered synonymous with privacy, can be used to secure the sensitive information that makes up master data. This includes any personal information, such as that pertaining to customers and employees. It can also include accounting and banking-related information that may be necessary for processes like procurement and sales. All such information can be secured using blockchain through cryptographic hashing. Businesses can internally build enterprise blockchain networks to secure and manage master data using a decentralized model. This secures the information not only against illicit modification but also against accidental loss due to physical damage to centralized servers. It also supports compliance with privacy regulations in an easily demonstrable manner, because data on a blockchain, in addition to being immutable, is transparent and visible to all participants, ensuring smoother audits and checks.
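
As a toy illustration of cryptographic hashing applied to master data, the sketch below chains SHA-256 hashes over records so that modifying any earlier record changes every subsequent hash. The record contents are invented, and this demonstrates tamper evidence only; a real blockchain adds signatures, consensus, and replicated storage.

```java
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.List;

// Toy sketch: chain SHA-256 hashes over master-data records so that editing an
// earlier record invalidates every hash after it (tamper evidence only).
public class MasterDataChain {
    public static void main(String[] args) throws Exception {
        List<String> records = List.of(
                "customer:1001;name=Acme Ltd",       // illustrative master data
                "supplier:2002;name=Globex Corp");

        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        String previousHash = "";
        for (String record : records) {
            byte[] digest = sha256.digest((previousHash + record).getBytes());
            previousHash = HexFormat.of().formatHex(digest);
            System.out.println(previousHash + "  <-  " + record);
        }
    }
}
```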


Developing a Functional Data Governance Framework


Harvard Business Review reports that 92 percent of executives say their Big Data and AI investments are accelerating, and 88 percent describe a greater urgency to invest in Big Data and AI. For AI and machine learning to be successful, Data Governance must also be a success. Yet Data Governance remains elusive for the 87 percent of businesses that, according to Gartner, have lower levels of Business Intelligence maturity. Recent news has also underlined the need to improve Data Governance processes. Data breaches continue to affect customers, and the impacts are broad: an organization’s customers (including banks, universities, and pharmaceutical companies) must continually take stock and change their user names and passwords. Effective Data Governance is a fundamental component of data security processes. Data Governance also has to drive improvements in business outcomes. “Implementing Data Governance poorly, with little connection or impact on business operations, will just waste resources,” says Anthony Algmin, Principal at Algmin Data Leadership. To mature, Data Governance needs to be business-led and a continuous process, as Donna Burbank and Nigel Turner emphasize.


Survey: Data-center staffing shortage remains challenging

Contributing to the staffing crisis is a lack of workplace diversity. In particular, the Uptime Institute’s research highlights a significant gender imbalance: 25 percent of managers surveyed have no women among their design, build or operations staff, and another 54 percent have 10 percent or fewer women on staff. Only 5 percent of respondents said women represent 50 percent or more of staff. Yet most respondents don’t seem to think there’s anything deterring women from working where they work. A majority (85 percent) said it’s easy for women to pursue a career in their respective organization’s data center team or department; just 15 percent said it’s difficult. Referring to the data-center industry as a whole, respondents were less confident about women’s employment prospects: 53 percent said it’s easy for women to pursue a career in data centers, and 47 percent said it’s difficult. In the big picture, diversity issues could become a threat to business operations. “Study after study shows that a lack of diversity is not just a pipeline issue,” Ascierto said. 


How banks can use ecosystems to win in the SME market

In parallel with designing the prototype, banks need to think through IT implications at the outset. The design choices will significantly affect the speed of development and the potential reach of the new solution. A design based on integration with an existing banking app might command a larger audience than a new stand-alone application—yet the latter typically offers more flexibility. The choice of a platform should be wedded to the monetization approach (see the “Think early about monetization” section). If the bank wants to retain the option of spinning off an ecosystem platform in the future, or listing it separately, its IT should not be enmeshed with the bank’s legacy systems. Nor can it be completely divorced: efficient transfer of information between the two systems is needed to maximize value for both banking and nonbanking offerings. IT is a key driver of costs and of the ecosystem design and business model. For instance, a Western European bank decided to integrate its ecosystem solution with its mobile banking platform.


The New Addition to the Dell EMC Ready Solutions for AI Portfolio


The Deep Learning with Intel solution joins the growing portfolio of Dell EMC Ready Solutions for AI and was unveiled today at the International Supercomputing Conference in Frankfurt. This integrated hardware and software solution is powered by Dell EMC PowerEdge servers, Dell EMC PowerSwitch networking, and scale-out Isilon NAS storage; it leverages the newest AI capabilities of the 2nd Generation Intel® Xeon® Scalable processor microarchitecture and the Nauta open source software, and it includes enterprise support. The solution empowers organizations to deliver on the combined needs of their data science and IT teams and to leverage deep learning to fuel their competitiveness. Dell Technologies Consulting Services help customers implement and operationalize Ready Solution technologies and AI libraries and scale their data engineering and data science capabilities. Once deployed, ProSupport experts provide comprehensive hardware and collaborative software support to help ensure optimal system performance and minimize downtime.


5G in the UK — overhyped or has the next era of connectivity really begun?

The availability of 5G depends on local operators (EE, O2, Vodafone and so on); businesses are relying on them to build out the networks and advance their capabilities. These businesses will need connectivity across multiple networks, so while an operator race is developing, it is important that every network is competitive. Unfortunately, this is not a priority for heated competitors. Over the last two months, operators have revealed their capabilities, but they are very focused on their own networks. It is unlikely they will have even started to talk about how to make 5G available to other partners, or how to support sharing across partners within the network. As with other technology changes, that typically comes in a second phase. “Typically, the first phase is to build out and scale up within their own networks; and then the next phase is asking how do you do interoperability and interworking between the networks,” said Sherwood. “That’s even further away from scale. ... ”


Could AI Enable The Idea Of 'Reverse Fact Checking'?

Fact checking today is a reactive process in which journalists wait for a falsehood to begin spreading virally and then publish their final verdict long after the falsehood’s spread has tapered off and the damage is done. Much of this delay stems from the amount of time and research it takes for fact checkers to investigate a claim and determine its veracity. What if we inverted this process, requiring every social media post to provide external attribution for its claims and using deep learning algorithms to compare the statements in the post to the original material it cites as its source? Could this “reverse fact checking” largely curb the spread of digital falsehoods? The greatest limitation of today’s fact checking landscape is the time and effort it takes fact checkers to investigate a claim. Collecting evidence, reaching out to organizations and experts for commentary and summarizing the resulting information into a final verdict is an extremely time-consuming process that offers few opportunities for efficient scaling.
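
As a deliberately naive sketch of the comparison step such a system would need, the code below scores word overlap (Jaccard similarity) between a post's claim and the source text it cites; both strings are invented, and the article's vision would require semantic deep-learning models rather than token overlap.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Naive sketch of the attribution check in "reverse fact checking": score how
// much a post's claim overlaps with the source it cites. Word-level Jaccard
// similarity is a crude stand-in for the deep-learning comparison envisioned.
public class AttributionCheck {
    static double jaccard(String a, String b) {
        Set<String> wa = new HashSet<>(Arrays.asList(a.toLowerCase().split("\\W+")));
        Set<String> wb = new HashSet<>(Arrays.asList(b.toLowerCase().split("\\W+")));
        Set<String> intersection = new HashSet<>(wa);
        intersection.retainAll(wb);
        Set<String> union = new HashSet<>(wa);
        union.addAll(wb);
        return union.isEmpty() ? 0.0 : (double) intersection.size() / union.size();
    }

    public static void main(String[] args) {
        String claim  = "The report says unemployment fell to 4 percent in May";
        String source = "Unemployment fell to 4 percent in May, the report says";
        System.out.printf("overlap: %.2f%n", jaccard(claim, source)); // high overlap
    }
}
```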


Identity and access management –– mitigating password-related cyber security risks

The death of the password has been heralded since Hewlett-Packard introduced biometric fingerprint scanning in laptops in the mid-1990s. But the password is still pervasive. Biometrics have become more common in personal and mobile devices, it’s true, but there is still a range of applications out there that are hugely dependent on passwords as their primary method of authentication. In any enterprise or small business there is still a heavy reliance on passwords, and often businesses don’t even know the extent to which applications are being used in the business. IT might know about the common apps used in the organisation, but it may have no visibility of the applications that some departments have adopted autonomously. To give an example, My1Login worked with one smaller organisation that thought it had about 40 applications in use across the business. When they switched My1Login’s solution on, the technology discovered there were actually 600 corporate applications in use. All of these are now fully integrated with single sign-on.


Five Android and iOS UI Design Guidelines for React Native


In a multiplatform approach, the designer is bound by the guidelines for each platform. This approach is more useful if your application has a complex UI and your main goal is to attract users who are more likely to spend their time on their favorite platform, be it iOS or Android. Going by the earlier example of a search bar, an Android user is more likely to be comfortable with the look and feel of the standard search bar of an Android app. This contrasts with an iPhone user, who will not be comfortable with the standard Android search bar. So, in a multiplatform approach, you strive to give each user the kind of look and feel they are used to. Let’s look at a more realistic example to get a clearer picture of what the multiplatform approach entails: Airbnb. As you can see in the image below, the iOS and Android versions of the Airbnb app look entirely different, and the reason is that they follow design guidelines that are entirely platform-specific.



Quote for the day:



"Blessed are the people whose leaders can look destiny in the eye without flinching but also without attempting to play God "- Henry Kissinger