Daily Tech Digest - May 21, 2018

Smaller micro datacentres built on a modular approach make it possible to base data processing facilities on-site, or as close to the location as possible. This edge computing is essential because it gives manufacturers the capability to run real-time analytics, rather than shipping vast volumes of data all the way to the cloud and back for processing. Modular datacentres give operators the scope to ‘pay as you grow’ when the time comes for expansion. The rise of the modular UPS provides similar benefits for power protection requirements too. And with all the additional revenue a datacentre could make from Industry 4.0, a reliable and robust continuous supply of electricity becomes even more imperative. Transformerless modular UPSs deliver higher power density in less space, run far more efficiently at all power loads so waste less energy, and don’t need as much energy-intensive air conditioning to keep them cool. Any data centre manager planning to take advantage of manufacturers’ growing data demands would be wise to review their current power protection capabilities.



Angular Application Generator - an Architecture Overview


Tools like Angular CLI and Yeoman are helpful when we want to simplify source generation by following a strict, pragmatic standard for developing an application. Angular CLI is useful for scaffolding, while Yeoman delivers very interesting generators for Angular and brings a great deal of flexibility, because you can write your own generator to fit your needs. Both Yeoman and Angular CLI enrich software development through their ecosystems, delivering plenty of predefined templates to build up the initial software skeleton. On the other hand, it can be hard to rely on templating alone. Sometimes standardized rules can be translated into a couple of templates that are useful in many scenarios. But that is not the case when trying to automate very different forms with many variations, layouts and fields, where countless combinations of templates would be produced. That brings only headaches and long-term issues, because it reduces maintainability and incurs technical debt.


Asigra evolves backup/recovery to address security, compliance needs

Asigra addresses the Attack-Loop problem by embedding multiple malware detection engines into its backup stream as well as the recovery stream. As backups happen, these engines look for embedded code and use other techniques to catch malware, quarantine it, and notify the customer, making sure malware isn’t unwittingly carried over to the backup repository. On the flip side, if malware did get into the backup repositories at some point in the past, the engines inspect the data as it is being restored to prevent re-infection. Asigra has also added the ability for customers to change their backup repository name so that it’s a moving target for viruses that would seek it out to delete the data. In addition, Asigra now requires multi-factor authentication to delete data. An administrator must first authenticate to the system, and even then the data goes into a temporary environment with a time delay before actual permanent deletion. This helps to ensure that malware can’t immediately delete the data. These new capabilities make it more difficult for the bad guys to render the data protection solution useless, and more likely that a customer can recover from an attack without having to pay the ransom.
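The time-delayed deletion pattern described above can be sketched in a few lines. This is a hypothetical illustration in TypeScript, not Asigra's actual implementation — the `BackupRepository` class and its method names are invented:

```typescript
// Hypothetical sketch of time-delayed deletion: an MFA-authenticated delete
// only quarantines data; permanent removal is refused until a grace period
// elapses, so malware cannot destroy backups instantly and an operator can
// still restore them during the delay.
class BackupRepository {
  private items = new Map<string, string>();
  private quarantine = new Map<string, { data: string; deletableAt: number }>();

  constructor(private graceMs: number) {}

  store(id: string, data: string): void {
    this.items.set(id, data);
  }

  // Step 1: a delete request needs MFA, and only moves the item to quarantine.
  requestDelete(id: string, mfaPassed: boolean, now: number): boolean {
    const data = this.items.get(id);
    if (!mfaPassed || data === undefined) return false;
    this.items.delete(id);
    this.quarantine.set(id, { data, deletableAt: now + this.graceMs });
    return true;
  }

  // Step 2: permanent purge is refused until the grace period has elapsed.
  purge(id: string, now: number): boolean {
    const entry = this.quarantine.get(id);
    if (!entry || now < entry.deletableAt) return false;
    this.quarantine.delete(id);
    return true;
  }

  // Recovery path: quarantined data can be restored during the delay.
  restore(id: string): boolean {
    const entry = this.quarantine.get(id);
    if (!entry) return false;
    this.items.set(id, entry.data);
    this.quarantine.delete(id);
    return true;
  }
}
```

The key property is that a compromised credential alone cannot cause immediate, irreversible data loss: the grace period gives the customer a window to notice the deletion and recover.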


Moving away from analogue health in a digital world


Technology has an important role to play in the delivery of world-class healthcare. Secure information about a patient should flow through healthcare systems seamlessly. The quality, cost and availability of healthcare services depend on timely access to secure and accurate information by authorised caregivers. Interoperability cannot be solved by any one organisation in isolation. What is needed is for providers, innovators, payers, governing bodies and standards development organisations to come together to apply innovative and agile solutions to the problems that healthcare presents. We can clearly see the pressure that healthcare organisations are under at the moment. Researchers at the universities of Cambridge, Bristol, and Utah found that a staggering 14 million people in England now have two or more long-term conditions – putting a major strain on the UK’s healthcare services. To add to this, recent figures show that more than half of healthcare professionals believe the NHS’s IT systems are not fit for purpose.


Shadow IT is a Good Thing for IT Organizations

The move to shadow IT is a good thing for IT. Why? It is a wake-up call. It sends a clear message that IT is not meeting the requirements of the business. IT leaders need to rethink how to transform the IT organization to better serve the business and get ahead of its requirements. There is a significant opportunity for IT to play a leading role in business today. However, it goes beyond just the nuts and bolts of support and technology. It requires IT to get more involved in understanding how business units operate and to proactively seek opportunities to advance their objectives. It requires IT to reach beyond the cultural norms that have been built over the past 10, 20, or 30 years. A new type of IT organization is required. A fresh coat of paint won’t cut it. Change is hard, but the opportunities are significant. This is a story about IT moving from a reactive state to a proactive state. For many, it requires a significant change in the way IT operates, both internally within the IT organization and externally with non-IT organizations. The opportunities can radically transform the value IT brings to driving the business forward.


Fintech is disrupting big banks, but here’s what it still needs to learn from them

Though it’s definitely possible to grow while managing risk intelligently, it’s also true that pressure to match the “hockey-stick” growth curves of pure tech startups can lead fintechs down a dangerous path. Startups should avoid the example of Renaud Laplanche, former CEO of peer-to-peer lender Lending Club, who was forced to resign in 2016 after selling loans to an investor that violated that investor’s business practices, among other accusations of malfeasance. It’s not just financial risk that they may manage badly: the sexual harassment scandal that recently rocked fintech unicorn SoFi shows that other types of risky behavior can impact bottom lines, too. While it might be common for pure tech startups to ask forgiveness, not permission, when it comes to the tactics they use to expand, fintechs should be aware that they’re playing in a different, more risk-sensitive space. Here again, they can learn from banks — who will also, coincidentally, look for sound risk management practices in all their partners. Since the 2008 crisis, financial institutions have increasingly taken a more holistic approach to risk as the role of the chief risk officer (CRO) has broadened.


The Banking Industry Sorely Underestimates The Impact of Digital Disruption


According to McKinsey, many organizations underestimate the increasing momentum of digitization. This includes the speed of technological changes, resultant behavioral changes and the scale of disruption. “Many companies are still locked into strategy-development processes that churn along on annual cycles,” states McKinsey. “Only 8% of companies surveyed said their current business model would remain economically viable if their industry keeps digitizing at its current course and speed.” Most importantly, McKinsey found that most organizations also underestimate the work that is needed to transform an organization for tomorrow’s reality. Much more than developing a better mobile app, organizations need to transform all components of an organization for a digital universe. If this is not done successfully, an organization risks being either irrelevant to the consumer or non-competitive in the marketplace … or both. Complacency in the banking industry can be partially blamed on the fact that digitization of banking has only just begun to transform the industry. No industry has been transformed entirely, with banking just beginning to realize core changes.


Why tech firms will be regulated like banks in the future


First, we have become addicted to technology. We live our lives staring at our devices rather than talking to each other or watching where we are going. It’s not good for us and, according to Pew Research, is responsible for more and more suicides, particularly amongst the young. Pew analysis found that the generation of teens they call the “iGen” – those born after 1995 – is much more likely to experience mental health issues than their millennial predecessors. Second is privacy. Facebook and other internet giants are abusing our privacy rights in order to generate ad revenues, as demonstrated by Cambridge Analytica, but they’re not the only ones. Google, Alibaba, Tencent, Amazon and more are all making bucks from analyzing our digital footprints, and we let them because we’re enjoying it, as I blogged about recently. Third is that these firms hold too much power. When six firms – Google (Alphabet), Amazon, Facebook, Tencent, Alibaba and Baidu – have almost all the digitally held information on all the citizens of the world, it creates a backlash and a fear. I’ve thought about this a great deal and believe it will drive a new decentralized internet.


Ericsson and SoftBank deploy machine learning radio network

"SoftBank was able to automate the process for radio access network design with Ericsson's service. Big data analytics was applied to a cluster of 2,000 radio cells, and data was analysed for the optimal configuration." Head of Managed Services Peter Laurin said Ericsson is investing heavily in machine learning technology for the telco industry, citing strong demand across carriers for automated networks. To support this, Ericsson is running an Artificial Intelligence Accelerator Lab in Japan and Sweden for its Network Design and Optimization teams to develop use cases. In introducing its suite of network services for "massive Internet of Things" (IoT) applications in July last year, Ericsson had also added automated machine learning to its Network Operations Centers in a bid to improve efficiency and bring down the cost of managing and operating networks. According to SoftBank's Tokai Network Technology Department radio technology section manager Ryo Manda, SoftBank is now collaborating with Ericsson on rolling out the solution to other regions. "We applied Ericsson's service on dense urban clusters with multi-band complexity in the Tokai region," Manda said.


Models and Their Interfaces in C# API Design


A real data model is deterministically testable. Which is to say, it is composed only of other deterministically testable data types until you get down to primitives. This necessarily means the data model cannot have any external dependencies at runtime. That last clause is important. If a class is coupled to the DAL at run time, then it isn’t a data model. Even if you “decouple” the class at compile time using an IRepository interface, you haven’t eliminated the runtime issues associated with external dependencies. When considering what is or isn’t a data model, be careful about “live entities”. In order to support lazy loading, entities that come from an ORM often include a reference back to an open database context. This puts us back into the realm of non-deterministic behavior, wherein the behavior changes depending on the state of the context and how the object was created. To put it another way, all methods of a data model should be predictable based solely on the values of its properties. ... Parent and child objects often need to communicate with each other. When done incorrectly, this can lead to tightly cross-coupled code that is hard to understand.
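The determinism rule can be shown with a minimal sketch (written here in TypeScript for brevity, though the article's context is C#; the `OrderLine` class is invented). Every method's result follows solely from property values, with no repository or database context in sight:

```typescript
// A deterministic data model: no ORM context, no lazy loading, no
// IRepository — every method is a pure function of the properties.
class OrderLine {
  constructor(
    public quantity: number,
    public unitPrice: number,
    public discountRate: number, // e.g. 0.5 for 50% off
  ) {}

  // Predictable from property values alone: same inputs, same output.
  total(): number {
    return this.quantity * this.unitPrice * (1 - this.discountRate);
  }
}
```

Because `total()` depends on nothing but the three properties, it can be tested without mocks, stubs, or an open database context — exactly the deterministic testability the article describes.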



Quote for the day:
"Strategy Execution is the responsibility that makes or breaks executives" -- Alan Branche and Sam Bodley

Daily Tech Digest - May 20, 2018

Understanding how Design Thinking, Lean and Agile Work Together


Lean started out as a response to scientific management practices in manufacturing. Organisations sought efficiency through processes, rules, and procedures and management was mostly about control. But in modern business, control is a falsehood. Things are too complex, too unpredictable, and too dynamic to be controlled. Lean offers a different mindset for managing any system of work. It’s fundamentally about exploring uncertainty, making decisions by experimenting and learning and empowering people who are closest to the work to decide how best to achieve desired outcomes. Lean says be adaptive, not predictive. Agile is related to Lean. The differences are mostly about what these mindsets are applied to, and how. In conditions of high uncertainty, Agile offers ways to build software that is dynamic and can adapt to change. This isn’t just about pivoting. It’s also about scaling and evolving solutions over time. If we accept that today’s solution will be different from tomorrow’s, then we should focus on meeting our immediate needs in a way that doesn’t constrain our ability to respond when things change later. The heart of Agile is adapting gracefully to changing needs with software.



How to write a GDPR-compliant data subject access request procedure

Recital 63 of the GDPR states, “a data subject should have the right of access to personal data which have been collected concerning him or her, and to exercise that right easily and at reasonable intervals, in order to be aware of, and verify, the lawfulness of the processing.” The procedure for making and responding to subject access requests remains similar to most current data protection laws, but there are some key changes you should be aware of under the GDPR: In most circumstances, the information requested must be provided free of charge. Organisations are permitted to charge a “reasonable fee” when a request is manifestly unfounded, excessive or repetitive. This fee must be based on the administrative cost of providing the information.
Information must be provided without delay and within a month. Where requests are complex or numerous, organisations are permitted to extend the deadline to three months. However, they must still respond to the request within a month to explain why the extension is necessary.
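As a rough illustration of the timescales above (a sketch only, not legal advice — the `sarDeadlines` function is invented for this example), the response clock could be computed like this:

```typescript
// Sketch of the GDPR subject-access-request clock: one month by default,
// extendable to three months for complex or numerous requests — but the
// organisation must still reply within the first month to explain why the
// extension is necessary.
function sarDeadlines(received: Date, complex: boolean) {
  const addMonths = (d: Date, n: number): Date => {
    const r = new Date(d);
    r.setMonth(r.getMonth() + n);
    return r;
  };
  return {
    // Must always respond (or explain an extension) within one month.
    initialResponse: addMonths(received, 1),
    // Full answer due after one month, or three months if extended.
    fullResponse: addMonths(received, complex ? 3 : 1),
  };
}
```

For a request received on 25 May 2018 and flagged as complex, this yields an initial response due by 25 June and a full response by 25 August.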



New Robot Swims With No Motor and No Battery

This polymer’s movements activate a switch in the robot's body attached to a paddle. Once the paddle is triggered, it behaves like a rowing oar, pushing the small robot forward. The polymer strips have the unique ability to behave differently according to their thickness. Therefore, the engineers used a variety of strip sizes to generate different responses at different times, resulting in robots that can swim at different speeds and directions. The engineers were even successful in making the mini robots drop off a payload. "Combining simple motions together, we were able to embed programming into the material to carry out a sequence of complex behaviors," says Caltech postdoctoral scholar Osama R. Bilal, co-first author of the paper. The team is now researching adding other functionalities to the robots, such as responding to environmental cues like pH or salinity. They are also looking into redesigning the devices to be self-resetting according to temperature shifts, enabling the robots to swim indefinitely. The engineers have some very ambitious potential projects for their little machines, such as delivering drugs and even containing chemical spills. This week was an important one for the autonomy of small robots.


The Fintech Files: Do we expect too much from AI?

While many think AI is the future, lots of financial institutions are using it to tidy up the past. Innovation labs and centres of excellence sound impressive, but behind the grand declarations, experts know that with AI the output is only as good as the input: you need solid data for it to work. And that’s the main problem for so many institutions. Internal data is often messy. For now, Nomura’s AI lab is designed to tackle just that; dealing with historical data as well as making sure all live data is stored consistently so it can be used for analytics in the future. Something else that is holding up progress: Unlike collaborations in blockchain, we haven’t seen many banks teaming up on AI yet. Hardly surprising, given its potential competitive advantages. But it makes it hard to gauge what City firms have achieved so far, Mohideen said. The AI hype started off with visions of robots taking over the trading floor. At the moment, most of them are still just doing admin.


GDPR in real life: Fear, uncertainty, and doubt

"Most industries face an ambitious regulatory agenda, and have been doing so for years. When considering GDPR two things happened: Firstly, it was de-prioritized in relation with other topics with an earlier deadline, secondly organizations have been -- across the board -- underestimating the impact of the new legislation on processes and systems. When GDPR was eventually picked up in a structural way, it has become increasingly clear to most organizations that, although they will be able to put into place policies and processes, the long tail will be in the implementation of various aspects into the (legacy) IT landscape. This is bound to be a large part of the post May 25 backlog for most of them. Strategically not complying should be a thing of the past, where previous legislation would in the worst case fine relatively small amounts. GDPR more fundamentally will become part of the license to operate with serious implications, both monetary as well as reputational. The biggest fear for heads of communication and board members alike is becoming the showcase in the media in the coming weeks, months."


Optimistic about AI and the future of work

Despite what you may have heard elsewhere, the future of work in a world with artificial intelligence (AI) is not all doom and gloom. And thanks to a research-backed book from Malcolm Frank, What to Do When Machines Do Everything, we have data to prove it. Also, thanks to new educational approaches, we are better equipped to prepare students and displaced workers for a future with AI. All of these topics were covered at Cornell’s Digital Transformation Summit, where my colleague Radhika Kulkarni and I spoke alongside Frank and some of our country’s top educational leaders. Frank, Executive VP of Strategy and Marketing at Cognizant, says we’re experiencing the fourth industrial revolution. He anticipates that the percentage of job loss from AI will correspond with job loss rates during other periods of automation throughout history, including automation through looms, steam engines and assembly lines. Fundamentally, workforce changes from AI will be like those during the industrial revolution and the introduction of the assembly line. About 12 percent of jobs will be lost. Around 75 percent of jobs will be augmented. And there will be new jobs created.


Gatwick Airport embraces IoT and Machine Learning


Experts are good at finding great ways to utilize limited resources, which is particularly important at Gatwick. When aided by IT, however, they can do even more. Machine learning can detect busy areas in the airport through smartphones, and tracking these results over the long term can provide key insights into optimizing day-to-day operations. When making decisions, Gatwick’s management will be aided by powerful data that can provide insights not attainable with more traditional technologies, and the new IT infrastructure will be key to this analysis. Facial recognition technology will boost security as well as track late passengers, and personalized services based on smartphones or wearable technology can provide valuable updates to travellers on a personal level. Dealing with lost baggage can be a time-consuming and often stressful process. Armed with its new IT infrastructure, Gatwick and its airline operators are poised to offer a better alternative. Being able to track luggage and its owners creates new opportunities for simplifying the check-in and baggage claim process, helping get travellers in and out of the airport in a prompt and seamless manner.


What Is The Future Of Cryptocurrencies And Blockchain?

The cryptocurrency market is highly volatile. More than anything, cryptocurrencies currently serve investors as a speculative instrument. There are huge risks involved, and many nations have been restraining the market in one way or another. Many scholars and industry leaders have stated that Bitcoin and many other cryptocurrencies are some sort of Ponzi scheme. So when asked what the future of cryptocurrencies looks like, Simon is quite optimistic. Rather than focusing on their day-to-day movements, if one looks at the big picture, it is quite evident that these currencies serve a particular function. ... Another important point of observation is that even though more and more nations are banning one cryptocurrency or another, especially in Southeast Asia, more money has already been invested in ICOs in the first four months of 2018 than in the whole of 2017. This clearly indicates that more and more institutional money is moving into the crypto world.


Governance and culture will differentiate banks in 2018

It is probably no longer enough for board members to have just a simple understanding of regulatory updates like VAT or IFRS 9. They also need to better understand how emerging technologies such as fintech, regulatory technology (regtech) and blockchain can disrupt future banking operations and the financial sector in general. Boards of directors at banks still have ample opportunities to reshape their business models, and by leveraging technology, they can develop products that create further competitive advantage. To strengthen board governance even further, banks must also develop a skills matrix that includes knowledge and experience in financial reporting and internal controls, strategic planning, risk management, and corporate governance standards. The skills matrix, which should ideally be completed by the board, can include specialist requirements for capital markets, risk management, audit, finance, regulatory compliance and information technology (IT). Some banks in the UAE have started to appoint directors in accordance with the expertise requirements in the matrix.



This cryptocurrency phishing attack uses new trick to drain wallets

Researchers note that MyEtherWallet is an appealing target for attackers because it is simple to use, but its weaker security compared with banks and exchanges makes it a prominent target for attack. Once the user hits a MEWKit page, the phishing attack gets underway, with credentials including logins and the private key of the wallet being logged by the attackers. After that, the crooks look to drain accounts when the victim decrypts their wallet. The scam uses scripts which automatically create the fund transfer by pressing the buttons as a legitimate user would, all while the activity remains hidden -- it's the first time an attack has been seen to use this automated tactic. The back end of MEWKit allows the attackers to monitor how much Ethereum has been collected, as well as keeping a record of private user keys and passwords which can potentially be used for further attacks. Those behind MEWKit appear to have been active for some time and have carried out some sophisticated campaigns. Researchers say MEWKit demonstrates a "new dedicated effort from threat actors to pursue cryptocurrency".



Quote for the day:


"Do not listen to those who weep and complain, for their disease is contagious." -- Og Mandino


Daily Tech Digest - May 18, 2018

“Elicitation of requirements and using those requirements to get IT onboard and understand what the client really wants, that’s one of the biggest responsibilities for BAs. They have to work as a product owner, even though the business is the product owner,” Gregory says. “[They need to ask:] What do the systems need to do, how do they do it, who do we need to get input from, and how do we get everyone to agree on what we need to do before we go and do it? The BA’s life revolves around defining requirements and prioritizing requirements and getting feedback and approval on requirements,” says Jeffrey Hammond, vice president and principal analyst at Forrester Research. The role of a business analyst is constantly evolving and changing – especially as companies rely more on data to inform business operations. Every company has different issues that a business analyst can address, whether it’s dealing with outdated legacy systems, changing technologies, broken processes, poor client or customer satisfaction or siloed large organizations.



Why AI is the perfect software testing assistant

Software testers are highly analytical, creative problem solvers. To identify hidden defects and areas where users might get frustrated, they must ask what others haven't asked and see what others don't see. But the analytical process takes time, and it isn't always as efficient as today's businesses and the users of their software demand. Artificial intelligence (AI), and its ability to search data sets for golden nuggets, could really come in handy here. An AI tool could quickly locate tests that have already been written to cover a particular scenario or new line of code. The system could even tell testers which test cases are most appropriate for the requirement. Over time, an AI tool could even pinpoint what might be causing the bugs that those tests find, based on past data. When combined with testers' wealth of knowledge about the product and its users, AI has the potential to significantly increase testing efficiency. ... We are beginning to see a few AI-enhanced testing tools hit the market now; initial capabilities include highlighting areas of risk that need further testing or that weren't covered at all. There will be many more advanced tools released in the coming months and years.
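A toy version of the test-selection idea — ranking existing test cases by how well they match a new requirement — might look like the sketch below. This is a naive keyword-overlap stand-in, not any real AI product; production tools would use learned models rather than word counting:

```typescript
// Naive stand-in for AI-driven test selection: score each existing test
// case by how many words it shares with the requirement under change,
// then return the test names ranked from most to least relevant.
function rankTests(requirement: string, testNames: string[]): string[] {
  const words = new Set(
    requirement.toLowerCase().split(/\W+/).filter(Boolean),
  );
  const score = (name: string): number =>
    name.toLowerCase().split(/\W+/).filter((w) => words.has(w)).length;
  return [...testNames].sort((a, b) => score(b) - score(a));
}
```

Given the requirement "user login with password", a test named "login rejects bad password" would rank above unrelated checkout or cart tests — the same prioritisation an AI tool would do with far richer signals (code coverage, past defect data, execution history).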


Blockchain technology lacks enough use cases to be disruptive, says Worldpay


A lack of strong use cases for blockchain is preventing the technology from disrupting the financial services industry, according to Worldpay. The payment company’s head of technology operations, Jason Scott-Taggart, said the organisation had not ruled out using blockchain in future, but the technology still has some way to go. “You’d be surprised, but in payments blockchain is not as disruptive as people assume it is. There’s not a lot of demand for cryptocurrencies, and blockchain as a technology is not something we have seen a good application for in what we do yet,” he told Computer Weekly in an interview at the ServiceNow Knowledge 18 conference. His view echoes research from Gartner, which found just 1% of CIOs are currently undertaking blockchain projects and 8% plan to start one in the short term. The analyst firm’s vice-president, David Furlonger, said the technology was “massively hyped” and warned “rushing into blockchain deployments could lead organisations to significant problems of failed innovation, wasted investment [and] rash decisions”.


Improve the rapid application development model for deployment readiness

An increasing number of enterprises adopt rapid application development tools rather than reworking their DevOps toolchain. Kubernetes, Marathon and other container orchestration platforms easily combine with continuous integration tools such as Jenkins to make every stage of rapid development, from unit testing through production, part of an explicit flow. The move from idea to prototype is defined in rapid development terms, using rapid development tools. Jenkins, Buildbot, CruiseControl and similar tools frame production as a stage of rapid or continuous development. At each stage, they link to container orchestration for deployment. Simply hosting application code in containers does not guarantee that the orchestration practices for each stage will be comparable, but it does organize the process overall. Containers, and a single orchestration tool, provide commonality across all stages of rapid application development to ensure that every stage is tested, including the transition to production. The rapid application development model, in both setups, is a string of testing and integration phases linked together.
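The explicit stage flow described above might be expressed as a declarative Jenkins pipeline. This is a minimal, hypothetical sketch — the image name, registry, deployment name and test command are placeholders, not taken from the article:

```groovy
// Hypothetical Jenkinsfile: each rapid-development stage ends by handing a
// container image to the orchestrator, keeping every stage's deployment
// practice comparable from unit testing through production.
pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh 'docker build -t registry.example.com/app:${BUILD_NUMBER} .' }
    }
    stage('Test') {
      steps { sh 'docker run --rm registry.example.com/app:${BUILD_NUMBER} npm test' }
    }
    stage('Deploy') {
      steps { sh 'kubectl set image deployment/app app=registry.example.com/app:${BUILD_NUMBER}' }
    }
  }
}
```

Because every stage produces and consumes the same container image, the orchestration step is identical whether the target is a test cluster or production — the commonality the paragraph describes.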


Adware bundle makes Chrome invisible to launch cryptojacking attacks

Known as cryptojacking, this practice involves the use of often-legitimate mining scripts which are deployed on browsers without user consent, before funneling the proceeds to mining pools controlled by threat actors. According to the publication, the bundle creates a Windows autorun which launches the Google Chrome browser -- in a way which is invisible. By using specific code to launch the browser, the software forces Chrome to launch in an invisible, headless state. The browser then connects to a mining page whenever the user logs into Windows. This page launches the CoinCube mining script that steals processing power to mine Monero. CPU usage may spike to up to 80 percent, and while victims may notice their PCs are slow, it could be a very long time before the software is uncovered and removed -- or users may simply blame Chrome as the oddity. The researcher opened the website page responsible for the script in a standard browser window and came across an interesting element of the script; the page masquerades as a Cloudflare anti-DDoS page.


Telegrab: Russian malware hijacks Telegram sessions

Cisco Talos researchers Vitor Ventura and Azim Khodjibaev dubbed the malware Telegrab. They analyzed two versions of it. The first one, discovered on April 4, 2018, only stole browser credentials, cookies, and all text files it could find on the system. The second one, spotted less than a week later, is also capable of collecting Telegram’s desktop cache and key files and login information for the Steam website. To steal Telegram cache and key files, the malware does not take advantage of any software flaw. It can target only the desktop version of the popular messenger because that version does not support Secret Chats and does not have the auto-logout feature active by default. This means that the attacker can use the stolen files to access the victim’s Telegram session (if the session is open), contacts and previous chats. Telegrab is distributed via a variety of downloaders, and it checks whether the victim’s IP address is part of a list that includes Chinese and Russian IP addresses, along with those of anonymity services in other countries. If it is, it exits.


Blockchain will be the killer app for supply chain management in 2018

Private or "permissioned" blockchains can be created within a company's four walls or between trusted partners, and centrally administered while retaining control over who has access to information on the network. Blockchain can also be used between business partners, such as a cloud vendor, a financial services provider and its clients. Bill Fearnley, Jr., research director for IDC's Worldwide Blockchain Strategies, recently returned from visiting company clients in China, where he found "everybody wanted to talk about supply chain." "If you build a blockchain ledger within [a single company], that has a certain value," Fearnley said. "The real value for blockchain is when you use distributed electronic ledgers and data to connect with suppliers, customers and intermediaries." One major challenge with supply chain management today involves trade finance record keeping, because a lot of it is still based on inefficient systems, including faxes, spreadsheets, emails, phone calls and paper.


Zara concept store greets shoppers with robots and holograms

At Zara’s new flagship store in London, shoppers can swipe garments along a floor-to-ceiling mirror to see a hologram-style image of what they’d look like as part of a full outfit. Robot arms get garments into shoppers’ hands at online-order collection points. iPad-wielding assistants also help customers in the store order their sizes online, so they can pick them up later. “Customers don’t differentiate between ordering online or in a store,” spokesman Jesus Echevarria Hernandez said. “You need to facilitate that as best as you can.” The store, which opened Thursday, shows how retailers are increasingly blending online and bricks-and-mortar shopping in a bid to keep up with the might of Amazon.com Inc. Inditex SA, the Spanish company that owns Zara, calls it an example of the technologies it will implement around the world. ... Amazon is moving the other way, building out its physical retail presence. Not only has it acquired grocer Whole Foods Market Inc., it has opened Amazon Go convenience stores, which use artificial intelligence and video cameras in lieu of checkouts, in several U.S. cities.


Icinga: Enterprise-Grade Open-Source Network Monitoring That Scales

Icinga runs on most of the popular Linux distros and the vendor provides detailed installation instructions for Ubuntu, Debian, Red Hat (including CentOS and Fedora) and SUSE/SLES. Icinga does not publish specific hardware requirements, but our installation ran well on a quad-core processor with 4 GB RAM, and this is probably a good starting point for a basic installation. ... As with most monitoring applications, storage is an important variable that largely depends on the number of hosts and services monitored and how often information is written to the log. With too little storage, the logs can easily fill up and freeze the system. We were able to quickly install Icinga on Ubuntu 16.04 LTS with just a few simple commands at the prompt. The first step was to download the necessary files to the local repository, and then install the actual Icinga application. Icinga can be used to monitor the availability of hosts and services from switches and routers as well as a variety of network services like HTTP, SMTP and SSH.
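Icinga's service checks are implemented as plugins rather than Python, but the core idea of probing whether a TCP service such as HTTP, SMTP or SSH is accepting connections can be sketched in a few lines (function name and defaults are our own, not Icinga's):

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 2.0) -> bool:
    """Minimal availability probe in the spirit of a monitoring plugin:
    report whether a TCP service accepts connections within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```

A real plugin would also distinguish warning from critical states and report latency, not just up/down.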


CISO soft skills in demand as position evolves into leadership role

You need to be able to understand what engineering is trying to do and what their goals are, what marketing and procurement are doing, what the customer is trying to do and what their goals are. If you can't empathize with what their goals and challenges are, you can't influence. So much flows from that: Your communication skills and communication style will flow from empathy. You also need to understand what we call the data subject -- the consumer who doesn't understand what's happening to their data -- and to have empathy for them, as well as for all the other stakeholders. It's empathizing with everybody and making the wisest decision to push for the best outcome you can. ... It's important for at least two different reasons. One, from a practical perspective, I've talked a lot about the skills gap. If we're blocking 50% of the planet from joining this career path, we're really contributing to our biggest challenge. Then the other part: Women across the globe are economically oppressed, and information security is a lucrative field. I want to get women into the information security field so they can be financially independent and make a good living.



Quote for the day:


"Leadership - leadership is about taking responsibility, not making excuses." -- Mitt Romney


Daily Tech Digest - May 17, 2018

7 Basic Rules for Button Design


Every item in a design requires effort by the user to decode. Generally, the more time users need to decode the UI, the less usable it becomes for them. But how do users understand whether a certain element is interactive or not? They use previous experience and visual signifiers to clarify the meaning of the UI object. That's why it's so important to use appropriate visual signifiers (such as size, shape, color, shadow, etc.) to make the element look like a button. Visual signifiers hold essential information value -- they help to create affordances in the interface. Unfortunately, in many interfaces the signifiers of interactivity are weak and require interaction effort; as a result, they effectively reduce discoverability. If clear affordances of interaction are missing and users struggle with what is "clickable" and what is not, it won't matter how cool the design is: if users find it hard to use, they will find it frustrating and ultimately not very usable. Weak signifiers are an even more significant problem for mobile users. To work out whether an individual element is interactive, desktop users can move the cursor over the element and check whether the cursor changes its state. Mobile users don't have that opportunity.



Serverless deployment lifts enterprise DevOps velocity


At a tipping point of serverless expertise, enterprises will start to put existing applications in serverless architectures as well. Significant challenges remain when converting existing apps to serverless, but some mainstream companies have already started that journey. Smart Parking Ltd., a car parking optimization software maker based in Australia, moved from its own data centers in Australia, New Zealand and the U.K. to AWS cloud infrastructure 18 months ago. Its next step is to move to an updated cloud infrastructure based on Google Cloud Platform, which includes Google Cloud Functions serverless technology, by June 2018. "As a small company, if we just stayed with classical servers hosted in the cloud, we were doing the same things the same way, hoping for a different outcome, and that's not realistic," said John Heard, CTO at Smart Parking. "What Google is solving are the big questions around how you change your focus from writing lots of code to writing small pieces of code that focus on the value of a piece of information, and that's what Cloud Functions are all about," he added.
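In that spirit, an HTTP-triggered cloud function really is just a small handler focused on one piece of information. A hypothetical sketch in the shape of a Python HTTP function (Smart Parking's actual functions and field names are not public, so everything here is invented):

```python
import json

def handle_bay_event(request):
    """Sketch of a small, single-purpose function: receive one
    hypothetical parking-bay status event and return a summary,
    rather than running a long-lived server."""
    event = request.get_json(silent=True) or {}
    bay = event.get("bay_id", "unknown")
    occupied = bool(event.get("occupied"))
    return json.dumps({"bay_id": bay,
                       "status": "occupied" if occupied else "free"})
```

The platform handles routing, scaling and teardown; the developer writes only the handler above.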


3 reasons why hiring older tech pros is a smart decision

For software engineers in particular, experience counts a lot, Matloff said. "The more experienced engineers are far better able to look down the road, and see the consequences of a candidate code design," he added. "Thus they produce code that is faster, less bug-prone, and more extendible." And in data science, recent graduates may know a number of techniques, but often lack the ability to use them effectively in the real world, Matloff said. "Practical intuition is crucial for effective predictive modeling," he added. Older tech workers also typically have more experience in terms of management and business strategy, Mitzner said. Not only can they offer those skills to the company, they can also act as mentors to younger professionals and pass on their knowledge, she added. "Most people who have been successful in their career would say that they had a great mentor," Mitzner said. "If you have a business that's all 20s to 30s, you could be really missing out on that." Many older employees also appreciate the same flexibility that younger workers do, as they balance work and home life with aging parents and children reaching adulthood, said Sarah Gibson, a consultant with expertise on changing generations in the workforce.


Computes DOS: Decentralized Operating System

“Computes is more like a decentralized operating system than a mesh computer,” replied one of our most active developer partners yesterday. He went on to explain that Computes has all of the components of a traditional computer but designed for decentralized computing. The more I think about it — the more profound his analogy is. We typically describe Computes as a decentralized peer-to-peer mesh supercomputing platform optimized for running AI algorithms near realtime data and IoT data streams. Every machine running the Computes nanocore agent can connect, communicate, and compute together as if they were physical cores within a single software-defined supercomputer. In light of yesterday’s discussion, I believe that we may be selling ourselves short on Computes’ overall capabilities. While Computes is uniquely well positioned for enterprise edge and high performance computing, most of our beta developers seem to be building next generation decentralized apps (Dapps) on top of our platform.


GDPR impact on Whois data raising concern


Cyber criminals typically register a few hundred, even thousands, of domains for their activities, and even if fake details are used, registrants have to use a real phone number and email address, which is enough for the security community to link associated domains. Using high-speed machine-to-machine technology and with full access to Whois data, Barlow said organisations such as IBM were able to block millions of spam messages or delay activity coming from domains associated with the individuals linked to spam messages. While the GDPR is designed to enhance the privacy of individuals, it is having the unintended effect of encouraging domain registrars not to submit registration details to the RDS, which means the information is incomplete and of less value to cyber crime fighters. Without access to Whois data, IBM X-Force analysts predict it might take more than 30 days to detect malicious domains by other methods, leaving organisations at the mercy of cyber criminals during that period.
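The linking the article describes amounts to a pivot over registration records: group domains by shared contact details. A toy sketch with made-up records (real data would come from the RDS via Whois):

```python
from collections import defaultdict

# Hypothetical Whois records for illustration only.
RECORDS = [
    {"domain": "spam-a.example", "email": "reg@mail.example",  "phone": "+1-555-0100"},
    {"domain": "spam-b.example", "email": "reg@mail.example",  "phone": "+1-555-0100"},
    {"domain": "shop.example",   "email": "owner@shop.example", "phone": "+1-555-0199"},
]

def link_domains_by_contact(records):
    """Group domains sharing a registrant email or phone number --
    the kind of pivot full Whois access enables."""
    groups = defaultdict(set)
    for rec in records:
        for key in (("email", rec["email"]), ("phone", rec["phone"])):
            groups[key].add(rec["domain"])
    # Keep only contacts that link more than one domain.
    return {k: sorted(v) for k, v in groups.items() if len(v) > 1}
```

At machine-to-machine speed, the same join across millions of records is what lets defenders block whole clusters of domains at once.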


Brush up on microservices architecture best practices


Isolating and debugging performance problems is inherently harder in microservice-based applications because of their more complex architecture. Therefore, productively managing microservices' performance calls for having a full-fledged troubleshooting plan. In this follow-up article, Kurt Marko elaborated on what goes into successful performance analysis. Effective examples of the practice will incorporate data pertaining to metrics, logs and external events. To then make the most use of tools like Loggly, Splunk or Sumo Logic, aggregate all of this information into one unified data pool. You might also consider a tool that uses the open source ELK Stack. Elasticsearch has the potential to greatly assist troubleshooters in identifying and correlating events, especially in cases where log files don't display the pertinent details chronologically. The techniques and automation tools used for conventional monolithic applications aren't necessarily well-suited to isolate and solve microservices' performance problems.
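The payoff of pooling log data is being able to pull one request's events out of several services in time order. A minimal illustration (field names are hypothetical; a real pipeline would do this in Elasticsearch, Splunk or similar):

```python
# Hypothetical per-service logs; real entries would be shipped to a log store.
auth_log  = [{"ts": 3, "req": "r1", "svc": "auth",  "msg": "token issued"}]
order_log = [{"ts": 1, "req": "r1", "svc": "order", "msg": "order received"},
             {"ts": 5, "req": "r2", "svc": "order", "msg": "order received"}]

def unified_trace(request_id, *logs):
    """Merge entries from several services into one pool and return the
    events for a single request in time order -- useful when individual
    log files don't show the pertinent details chronologically."""
    pool = [e for log in logs for e in log if e["req"] == request_id]
    return sorted(pool, key=lambda e: e["ts"])
```

Correlating on a shared request ID like this is what turns scattered microservice logs into a single debuggable timeline.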


More Attention Needs to be on Cyber Crime, Not Cyber Espionage

Cyber crime remains a global problem that continues to be innovative and all-encompassing. What's more, cyber crime doesn't focus solely on organizations but also on individuals. The statistics demonstrate the magnitude of the cyber crime onslaught. According to a 2017 report by one company, damages incurred by cyber crime are expected to reach USD $6 trillion by 2021. Conversely, cyber security investment is only expected to reach USD $1 trillion by 2021, according to the same source. Furthermore, data breaches continue to afflict individuals. During the first half of 2017, more than 2 billion records fell victim to cyber theft, whereas "only" 721 million records were lost during the last half of 2016, a 164 percent increase. According to another reputable source, the three major classifications of breaches impacting people were identity theft (69 percent), access to financial data (15 percent), and access to accounts (7 percent). With cyber crime communities existing all over the world, these groups and individuals offer professional business goods and services based on quality and reputation, which serve to quickly weed out inferior performers; innovation and dependability are instrumental to success.


Data integrity and confidentiality are 'pillars' of cybersecurity


There are two pillars of information security: data integrity and confidentiality. Let's take a simple example: your checking account. Integrity means the number. When you go to an ATM or online or to a teller and check your balance, that number should be easily agreed upon by you and your bank. There should be a clear ledger showing who put money in, when and how much, and who took money out, when and how much. There shouldn't be any randomness; there shouldn't be people putting money in or taking money out without your knowledge or your permission. So, one pillar is ensuring the integrity of information: the code you're running, the executables of your applications, should be the same ones the developer wrote. Just like the numbers in your bank account, the code you're running should not be tampered with. Then, there's confidentiality. You and your bank should be the only ones who know the numbers in your bank account. Take confidentiality away from your checking account, and you get the same problems that arise when confidentiality is lost from your applications and infrastructure.
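In practice, the code-integrity pillar is commonly enforced by comparing cryptographic digests: the bytes you are about to run should hash to the value the developer published. A minimal sketch:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# The developer publishes the digest of the executable they shipped...
shipped = b"original application code"
published_digest = sha256_hex(shipped)

# ...and before running, you verify that the bytes you actually have match.
def integrity_ok(received: bytes, expected_digest: str) -> bool:
    return sha256_hex(received) == expected_digest
```

Any tampering, even a single flipped bit, produces a different digest, so the check fails.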


This new type of DDoS attack takes advantage of an old vulnerability

"Just like the much-discussed case of easily exploitable IoT devices, most UPnP device vendors prefer focusing on compliance with the protocol and easy delivery, rather than security," Avishay Zawoznik, security research team leader at Imperva, told ZDNet. "Many vendors reuse open UPnP server implementations for their devices, not bothering to modify them for a better security performance." Examples of problems with the protocol go all the way back to 2001, but the simplicity of using it means it is still widely deployed. However, Imperva researchers claim the discovery of how it can be used to make DDoS attacks more difficult to attack could mean widespread problems. "We have discovered a new DDoS attack technique, which uses known vulnerabilities, and has the potential to put any company with an online presence at risk of attack," said Zawoznik. Researchers first noticed something was new during a Simple Service Discovery Protocol (SSDP) attack in April. 


This article focuses on the Module System and Reactive Streams; you can find an in-depth description of JShell here, and of the Stack Walking API here. Naturally, Java 9 also introduced some other APIs, as well as improvements related to internal implementations of the JDK; you can follow this link for the entire list of Java 9 characteristics. The Java Platform Module System (JPMS) – the result of Project Jigsaw – is the defining feature of Java 9. Simply put, it organizes packages and types in a way that is much easier to manage and maintain. In this section, we’ll first go over the driving forces behind JPMS, then walk you through the declaration of a module. Finally, you’ll have a simple application illustrating the module system. ... Module descriptors are the key to the module system. A descriptor is the compiled version of a module declaration – specified in a file named module-info.java at the root of the module’s directory hierarchy. A module declaration starts with the module keyword, followed by the name of the module. The declaration ends with a pair of curly braces wrapping around zero or more module directives.
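A declaration following the shape described above might look like this (module and package names are invented for illustration):

```java
// module-info.java, at the root of the module's directory hierarchy
module com.example.inventory {
    requires java.sql;                  // this module depends on java.sql
    exports com.example.inventory.api;  // only this package is visible to consumers
}
```

The `requires` and `exports` directives between the braces are what let the compiler and runtime enforce which packages and types are reachable, which is what makes large codebases easier to manage and maintain.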



Quote for the day:


"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer


Daily Tech Digest - May 16, 2018


In terms of the legal implications of AI, vicarious liability and agency cannot be applied to AI in the same way as they would for employee liability. Due to the black box nature of AI and the lack of transparency in its reasoning, it is difficult to attribute liability. The Fairchild principles of a 'material increase to risk' could be applied in future to determine liability, but without legislative clarification, the position is not entirely clear. Furthermore, AI can monitor price changes within a market and react very quickly, thereby potentially stifling competition by creating a form of collusion in the market. The European Commission is currently taking the threat of AI in competition seriously and exploring solutions to resolve these types of issues. From an intellectual property perspective, legislation has not been updated to cover the ownership of AI-generated intellectual property. Companies will need to ensure ownership of any materials or intellectual property created by AI vests or is transferred to them. In terms of ethics, the law cannot cover every moral scenario. AI is already creating unintended gender, race and socio-economic bias based on the data it works with.



Force multipliers in cybersecurity: Augmenting your security workforce

Organizations are employing security automation and orchestration technologies to make sure that the right person, with the right data, is there at the right time to make decisions, he said. In cybersecurity, it is important that the organization is clear about what actions must be taken after an incident occurs. Automation technologies can make changes right away to contain the issue, but just relying on technologies isn't enough to prepare for today's advanced threats, he added. Organizations should also practice breach preparedness drills to test their response, he stressed. Implementing these security orchestration and automation practices also relies on strong leadership that develops a team atmosphere and teaches team members to work together during a crisis, he said. It will be important to exhibit these strong cultural traits during a breach, especially because cybersecurity playbooks can crack under pressure, he added. "People want to practice what it's like to go through a breach," he said. "Security orchestration gives you the technology to respond fast and encourages you to practice it so that [when things go wrong] you're ready."


What is predictive analytics? Transforming data into future insights

Organizations use predictive analytics to sift through current and historical data to detect trends and forecast events and conditions that should occur at a specific time, based on supplied parameters. With predictive analytics, organizations can find and exploit patterns contained within data in order to detect risks and opportunities. Models can be designed, for instance, to discover relationships between various behavior factors. Such models enable the assessment of either the promise or risk presented by a particular set of conditions, guiding informed decision-making across various categories of supply chain and procurement events. ... While getting started in predictive analytics isn't exactly a snap, it's a task that virtually any business can handle as long as one remains committed to the approach and is willing to invest the time and funds necessary to get the project moving. Beginning with a limited-scale pilot project in a critical business area is an excellent way to cap start-up costs while minimizing the time before financial rewards begin rolling in. Once a model is put into action, it generally requires little upkeep as it continues to grind out actionable insights for many years.
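The simplest possible version of "detect trends and forecast" is a least-squares line fitted through historical values. A self-contained sketch with invented numbers:

```python
def fit_trend(history):
    """Ordinary least-squares line through (period, value) points --
    about the simplest predictive model there is."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def forecast(history, periods_ahead=1):
    """Extrapolate the fitted trend the given number of periods forward."""
    slope, intercept = fit_trend(history)
    return intercept + slope * (len(history) - 1 + periods_ahead)

# Hypothetical monthly order volumes trending upward:
demand = [100, 110, 120, 130]
```

Production models add seasonality, more features and uncertainty estimates, but the workflow is the same: fit on history, then extrapolate.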


Successful IoT deployment: The Rolls-Royce approach

"The IoT is useful when you know you can derive business benefit by making unknown processes visible," she says. "If you try and use sensors everywhere, you will get nowhere because it's too expensive and it's too imprecise. Rolls-Royce picks the places where its IoT solutions can make data visible, and which will create significant operational benefits. That, for me, is the key to a successful IoT deployment." Gorski advises other digital chiefs to analyse their business operations and understand where a lack of data transparency creates a headache. She has seen big-bang instrumentation projects happen and, for the most part, these are difficult to justify. "They end up being expensive to implement," says Gorski. "It's costly to transmit data and the business ends up with a patchwork quilt of information. It's important to remember there isn't a single solution for IoT instrumentation and you must bootstrap technology together from lots of different suppliers. All that bootstrapping adds costs and creates complexity."


The NHS is failing to deliver 'basic IT', says Matthew Swindells


“We are investing millions of pounds in technology, yet we’ve got six organisations that still can’t tell us what their waiting lists are. It’s not acceptable,” he said. Barts Health NHS Trust, for instance, hasn’t submitted a referral to treatment report to NHS England for nearly four years. “We walk around most hospitals and we’ve not known how many beds we have and how many patients are lying in them,” said Swindells. “We need to at the very least get the data that we capture back out. If we can’t do the basics, me going cap in hand to the treasury for another £10bn to sort IT out just sounds like fool’s money.”  He highlighted e-rostering as another example of failing to use data properly, saying most hospitals use an e-rostering system, which he described as a “glorified spreadsheet” and “expensive pieces of technology that are not enabling better rostering: not enabling the matching of staffing to clinical need, not enabling staff to be flexible about when they work and therefore making more available”.  “We have to make this stuff work well,” said Swindells.


Here’s what the big four U.S. mobile ISPs are doing with IoT

The playing field might not remain in its current state for long, with the main issue being the proposed $26.5 billion merger between T-Mobile and Sprint. Partridge said that would be a game-changer for carrier-based IoT in the U.S. “In the consumer business, T-Mobile’s going to be in charge of that, they’ve been wildly successful – but I think in IoT, Sprint will have every opportunity to take the lead,” he said. The idea, after the combination, would be to make acquisitions aimed at strengthening the new company’s position on the enterprise side of service provisioning in general, and focused on IoT particularly, though there are a number of tactical options for pursuing such a strategy. The new company could get into fleet management, a la Verizon and AT&T, snap up IoT software companies and package their offerings into new branded services, move heavily into surveillance and security, or even hardware. “The playbook is fairly open in terms of that, but the goal is to get away from connectivity-only value, because that’s not the place to be,” according to Partridge.


GDPR: Less Than One Month Out, the Top 3 Struggles

Instead of a collective sigh, May 25 might create more of a collective grunt. Most privacy professionals know that although a lot of work has been done in the run-up to D-day, GDPR compliance will require a constant focus. It is a journey, not a final destination. Those organizations that treat May 25 as the endpoint of their compliance drive, will be proven wrong. Another distinction between organizations will be their levels of ambition. Some organizations will look at GDPR as a mere checklist approach, I call it the "lawyer" approach (with all due respect to the lawyers amongst you, including myself). Legal compliance is core, but an organization's ambition should aim to go beyond and create a true cultural change. I truly believe that these privacy leaders ultimately will be rewarded in the market, banking on what I call a "trust dividend", reaping the benefits of constant investments in this space. Even though there is a broad spectrum amongst organizations around GDPR compliance, there are also some common themes and questions. In my role as CA Technologies Chief Privacy Strategist, I have had the opportunity to discuss GDPR with organizations, both public and private. 


Location-based services move beyond mobile and into enterprise apps

The battle for LBS relevance moves from companies that only support increasingly commoditized location data, which they license (e.g., mapping data for GPS), to those that can offer enhanced and supplemental services. Previously seen as an old-style GPS/mapping data company, the largest LBS company, HERE, is moving away from the old model, although not totally. It’s changing from just being a database to being a value-added supplier of a full range of LBS with its Open Location Platform. HERE has several partnerships with auto companies (Audi, BMW) and others (Intel, Oracle, Amazon Web Services, Microsoft) to add platform capabilities beyond their extensive mapping database. Those capabilities include value-added services such as tracking, traffic, safety services, and HD maps. HERE's main cloud-based LBS platform competitor, Mapbox, offers similar services but does not include its own mapping database, instead allowing clients to link to their preferred mapping data. HERE and Mapbox have some distinct strategy differences: Mapbox relies on others' data sets and can connect as needed and by user preference. HERE has its own data sets and is looking to add value on top.
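Much of that value-added layer (tracking, proximity, geofencing) rests on one primitive: the distance between two coordinates. The standard haversine great-circle formula, as a sketch:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres --
    a basic building block for tracking and proximity services."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))
```

For example, London to Paris comes out around 340 km; everything from "nearest charging station" to geofenced alerts is built on calls like this.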


Threat analytics: Keeping companies ahead of emerging application threats

Applications which can be downloaded are particularly vulnerable to cyber criminals, as they can be isolated from the network and attacked indefinitely until their defences are broken. Due to so many people using their personal mobile devices for work purposes, a compromised app will not only attack the individual or the business entity that published the app but could also grant attackers access to enterprise networks. Any application on an app store can be downloaded by anyone, and that includes bad actors. If an app is lacking in protection, once downloaded a bad actor might reverse engineer the app leaving it vulnerable to wide-scale tampering; IP/PII theft or API attack. With the code being left so vulnerable, the threat is extremely likely to turn into a widespread attack resulting in a loss of customers, brand damage, lost revenue, and lost jobs. On the other hand, with a threat analytics solution in place from the start, apps can provide valuable insights to the business the moment they are downloaded from an app store, thereby closing the loop.


Optimizing an artificial intelligence architecture: The race is on


Today, most AI workloads use a preconfigured database optimized for a specific hardware architecture. The market is going toward software-enabled hardware that will allow organizations to intelligently allocate processing across GPUs and CPUs depending on the task at hand, said Chad Meley, vice president of analytic products and solutions at Teradata. Part of the challenge is that enterprises use multiple compute engines to access multiple storage options. Large enterprises tend to store frequently accessed, high-value data such as customer, financials, supply chain, product and the like in high-performing, high I/O environments, while less frequently accessed big data sets such as sensor readings, web and rich media are stored in cheaper cloud object storage. One of the goals of composable computing is to use containerization to spin up computer instances such as SQL engines, Graph engines, machine learning engines and deep learning engines that can access data spread across these different storage options. 



Quote for the day:


"The task of leadership is not to put greatness into humanity, but to elicit it, for the greatness is already there." -- John Buchan


Daily Tech Digest - May 15, 2018

Why learning to code won't save you from losing your job to a robot

That's not to say that we won't have a need for high-level coders, Burton said. Many engineers are solving difficult problems that require creativity, while others are performing important research. However, a lot of software being written today is "essentially glue code," Burton said. "It's putting together pieces that already exist. That's the sort of thing that starts to get automated." Where should people turn instead to future-proof their careers? The humanities, according to Burton. "The humanities start to become very important when you start to realize that technology is going to become very, very easy to use," Burton said. "The toolsets change, but what becomes important is the creativity, and particularly the understanding of the human mind. Because as long as humans are still the consumer, they're going to matter, and they're going to be demanding humans in some areas of the process." One new area that requires human intelligence is determining where humans tolerate technology, Burton said.


Hyperledger Sawtooth: Blockchain for the enterprise

"This isn't just that a node crashes or a third of the nodes on the network crash, but rather something like up to a third of the nodes on the network can be actively trying to corrupt the network, but are unable to do that," Middleton said. "This would be our goal for most deployments when you're putting some sort of business value onto the network. You want to know that it will be resilient to attack." Beyond these capabilities, Hyperledger Sawtooth also features on-chain governance, which uses smart contacts to vote on blockchain configuration settings as the allowed participants and smart contracts. Further, it has an "advanced transaction execution engine" that's capable of processing transactions in parallel to help speed up block creation and validation. But, arguably, one of Sawtooth's most intriguing benefits "is its proof of elapsed time, or PoET, consensus mechanism, which is a novel attempt to bring the resiliency of public blockchains to the enterprise realm -- without forgoing the requirements of security and scale," said Jessica Groopman, industry analyst and founding partner of Kaleido Insights.


The Value of Probabilistic Thinking


It is important to remember that priors themselves are probability estimates. For each bit of prior knowledge, you are not putting it in a binary structure, saying it is true or not. You’re assigning it a probability of being true. Therefore, you can’t let your priors get in the way of processing new knowledge. In Bayesian terms, this is called the likelihood ratio or the Bayes factor. Any new information you encounter that challenges a prior simply means that the probability of that prior being true may be reduced. Eventually, some priors are replaced completely. This is an ongoing cycle of challenging and validating what you believe you know. When making uncertain decisions, it’s nearly always a mistake not to ask: What are the relevant priors? What might I already know that I can use to better understand the reality of the situation? Many of us are familiar with the bell curve, that nice, symmetrical wave that captures the relative frequency of so many things from height to exam scores. The bell curve is great because it’s easy to understand and easy to use. Its technical name is “normal distribution.” If we know we are in a bell curve situation, we can quickly identify our parameters and plan for the most likely outcomes.
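In odds form, the Bayesian update described above is a single multiplication: posterior odds equal prior odds times the likelihood ratio (Bayes factor). A worked sketch with invented numbers:

```python
def update_odds(prior_odds, bayes_factor):
    """Posterior odds = prior odds x likelihood ratio (Bayes factor)."""
    return prior_odds * bayes_factor

def odds_to_probability(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

# Hypothetical prior: you are 80% sure a belief is true (odds of 4:1).
prior_odds = 0.8 / 0.2
# New evidence is 10x more likely if the belief is false: Bayes factor 0.1.
posterior = odds_to_probability(update_odds(prior_odds, 0.1))
```

A Bayes factor below 1 drags the 80% prior down to roughly 29%: the prior isn't discarded, its probability of being true is simply reduced, exactly the cycle of challenging and validating described above.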


A governance perspective on security audit policy settings

One common mistake that administrators make is failing to define adequate audit trails to enable early detection of security threats and allow for related investigations. The main reason for this oversight is a failure to balance audit trail needs and systems capacity. Some administrators argue that excessive auditing results in production of huge amounts of event logs that are unmanageable. Deciding on what to audit and what not to audit, or what may or may not be omitted, is therefore not just a configuration task, but rather a risk assessment task that should be embedded in the governance structures of the organization’s IT security frameworks. The audit needs of the organization are guided by the regulations, security threat models, information required for investigations and IT security policy to which the organization is subjected. Identification of the possible threats that the organization faces is usually carried out as part of risk assessment. Security events derived from audit policy settings are key risk indicators that the organization should use to measure how vulnerable the system is to the identified threats. 


gpu ai gaming
Because of their single-purpose design, GPU cores are much smaller than CPU cores, so GPUs have thousands of cores whereas CPUs top out at around 32. With up to 5,000 cores available for a single task, the design lends itself to massively parallel processing. ... GPU use in the data center started with homegrown apps thanks to a language Nvidia developed called CUDA. CUDA uses a C-like syntax to make calls to the GPU instead of the CPU, but instead of executing a call once, it can be executed thousands of times in parallel. As GPU performance improved and the processors proved viable for non-gaming tasks, packaged applications began adding support for them. Desktop apps, like Adobe Premiere, jumped on board, but so did server-side apps, including SQL databases. The GPU is ideally suited to accelerating SQL queries because SQL performs the same operation – usually a search – on every row in the set. The GPU can parallelize this process by assigning a row of data to a single core. Brytlyt, SQream Technologies, MapD, Kinetica, PG-Strom and Blazegraph all offer GPU-accelerated analytics in their databases.
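The data-parallel model described here, the same predicate applied to every row at once, can be illustrated without a GPU. The sketch below contrasts a sequential row-by-row scan with a vectorized NumPy comparison, which follows the same one-operation-over-all-rows pattern a GPU kernel uses; the table and threshold are made up for the example.

```python
import numpy as np

# Hypothetical table: one million rows of a single "amount" column.
rng = np.random.default_rng(0)
amounts = rng.uniform(0, 1000, size=1_000_000)

def scan_sequential(rows, threshold):
    """Sequential scan: one core walking the table row by row."""
    return [i for i, v in enumerate(rows) if v > threshold]

def scan_parallel(rows, threshold):
    """Vectorized scan: the predicate applied to every row in one
    data-parallel operation, analogous to one GPU core per row."""
    return np.nonzero(rows > threshold)[0]

# Both scans find the same matching rows.
assert scan_sequential(amounts[:1000], 990) == list(scan_parallel(amounts[:1000], 990))
```

In a GPU database the vectorized step runs as a CUDA kernel across thousands of cores; the NumPy version is only an analogy for the programming model, not a GPU implementation.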


Sizing Up the Impact of Synthetic Identity Fraud

With recent data breaches and the associated flood of PII onto the dark web, synthetic identity fraud is easier to commit than ever. Credit card losses due to this fraud exceeded $800 million in the U.S. last year, says Julie Conroy, a research director at Aite Group. Perhaps more shocking is just how much of the fraud goes undetected, flying under the radar as credit write-offs. "One of the challenging aspects of this is often it doesn't get recognized as fraud and gets written off as a credit loss; so understanding the scope of the problem has been a challenge," Conroy says in an interview with Information Security Media Group about Aite's latest research. "A number of institutions are starting to see fundamental shifts to things like their credit delinquency curves that are only explainable by synthetic identity fraud." Mitigating the risk of synthetic identity fraud is challenging, given that it's designed to look like a real person establishing a credit history. But Conroy suggests that a layered approach can be valuable.


Introducing The Open Group Open Fair Risk Analysis Tool


The tool is designed for international use, with the user able to select local currency units and the order of magnitude (thousands, millions, billions, etc.) relevant to the analysis. Embedded graphs are controlled through intuitive settings, letting analysts and management inspect the results at a greater or lesser level of granularity as required. The tool further informs management by comparing and presenting statistical results such as the average annual loss exposure, user-defined loss percentiles, and the probability of exceeding a given annual loss. The tool is genuinely versatile, making it equally suitable for the university professor or corporate trainer teaching quantitative risk analysis and for experienced corporate risk analysts who need an easy-to-use yet accurate risk evaluator for individual risk questions. In addition, to further support both the tool and the Open FAIR standards, The Open Group has also recently published a Risk Analysis Process Guide, which offers best practices for performing Open FAIR risk analysis and aims to help risk analysts understand how to apply the Open FAIR risk analysis methodology.
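The statistics the tool reports, average annual loss exposure, loss percentiles, and the chance of exceeding a given annual loss, are typically produced by Monte Carlo simulation. The sketch below shows the general idea with made-up inputs; it is not The Open Group's implementation, and the frequency and magnitude ranges are illustrative placeholders for proper Open FAIR factor estimates.

```python
import random
import statistics

def simulate_annual_loss(freq_min=1, freq_max=5,
                         loss_min=10_000, loss_max=200_000):
    """One simulated year: draw a number of loss events, then a loss
    magnitude for each event, and return the year's total loss."""
    events = random.randint(freq_min, freq_max)
    return sum(random.uniform(loss_min, loss_max) for _ in range(events))

random.seed(42)
trials = [simulate_annual_loss() for _ in range(10_000)]

average_ale = statistics.mean(trials)            # average annual loss exposure
p90 = statistics.quantiles(trials, n=10)[-1]     # 90th-percentile annual loss
threshold = 500_000
p_exceed = sum(t > threshold for t in trials) / len(trials)  # chance of exceedance
```

Sweeping `threshold` over a range of values yields a loss exceedance curve, the standard way quantitative risk results of this kind are presented to management.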


12 Trends Shaping Identity Management

(Image by DRogatnev, via Shutterstock)
If the cybersecurity market is a globe, with each market segment taking its piece - one continent for endpoint security, an archipelago for threat intelligence - where would identity and access management fit? "Identity is its own solar system," says Robert Herjavec, CEO of global IT security firm Herjavec Group and Shark Tank investor. "Its own galaxy." "The problem with users is that they’re interactive," he explains. The reason identity management is such a challenge for enterprises is that users get hired, get fired, get promoted, access sensitive filesystems, share classified data, send emails with potentially classified information, try to access data they don’t have permission to see, and try to do things they aren’t supposed to do. Set-and-forget doesn’t work on people. Luckily, great IAM is getting easier to come by. Herjavec points to identity governance tools like Sailpoint and Saviynt and privileged access management tools like CyberArk, saying that now "not only are they manageable, they’re fundamentally consumable from a price point."


How IoT And IoE Are Positively Disrupting The Farm-To-Fork Industry


Technologies such as UAVs and orbital satellites are becoming necessary for successfully utilizing fields, analyzing crops and providing proper interventions. Today’s technology allows data about extremely specific field observations to be delivered straight to a tablet or computer. From thousands of miles away, landowners can have satellites monitoring their fields and sending instant information on crop health to anyone, anywhere in the world. These innovative technologies give farmers the ability to generate pertinent information about the health of their crops and their yield, identify problems, and make important, well-informed decisions. Having all of this sensor technology is just one step toward putting food on the table for an ever-growing population. An even bigger step is the implementation of these technologies globally, in both developed and underdeveloped countries. According to FAO statistics, the nations with the highest population growth rates are also the poorest, creating an even greater need for technology-based interventions.
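A common way satellite imagery is turned into the crop-health information mentioned above is the Normalized Difference Vegetation Index (NDVI), computed from near-infrared and red reflectance. The formula is standard; the reflectance values below are invented for illustration rather than taken from any real image.

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index, a standard crop-health
    indicator: healthy vegetation reflects strongly in near-infrared
    relative to red, pushing NDVI toward 1."""
    return (nir - red) / (nir + red)

# Hypothetical per-pixel reflectances from a satellite image tile.
healthy = ndvi(nir=0.50, red=0.08)   # high NDVI -> vigorous crop
stressed = ndvi(nir=0.30, red=0.20)  # low NDVI -> possible stress
```

In practice a monitoring service computes this per pixel across a whole field and flags low-NDVI zones, which is the kind of instant, targeted observation the passage describes being delivered to a farmer's tablet.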


istock 511068092
By accounting for disaster scenarios in your IT service management processes, you can integrate disaster recovery thinking into normal IT operations. This will reduce the possibility of extended system outages in a disaster, as well as provide you with a complete action plan for any incident, large or small. What happens if the disaster is less obvious? Do you know when to escalate those seemingly less harmful incidents and begin to initiate recovery procedures? By integrating your Disaster Recovery Plan into your overall IT service management processes, it becomes much clearer when it’s necessary to invoke disaster recovery procedures, rather than continuing to try to troubleshoot your way out of the situation. Knowledge is power, so the more you know about your systems and what to do in case of failures of any size, the less likely you are to experience a long service interruption. One of the best ways to start the integration between your Disaster Recovery Plan and your IT service management is by performing a Business Impact Analysis on all your IT systems.
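The escalation decision described here can be made mechanical once a Business Impact Analysis has assigned each system a recovery time objective (RTO). The sketch below is illustrative only: the system names and RTO values are invented, and a real ITSM process would weigh more factors than outage duration alone.

```python
from datetime import timedelta

# Hypothetical RTOs produced by a Business Impact Analysis.
RECOVERY_TIME_OBJECTIVES = {
    "order-processing": timedelta(hours=1),
    "reporting": timedelta(hours=24),
}

def should_invoke_dr(system: str, estimated_outage: timedelta) -> bool:
    """Escalate to the Disaster Recovery Plan when the estimated time to
    troubleshoot an incident exceeds the system's recovery time objective."""
    return estimated_outage > RECOVERY_TIME_OBJECTIVES[system]

print(should_invoke_dr("order-processing", timedelta(hours=2)))  # True
print(should_invoke_dr("reporting", timedelta(hours=2)))         # False
```

Encoding the threshold this way removes the guesswork from "less obvious" disasters: the same two-hour outage triggers DR for a critical system but stays in normal troubleshooting for a tolerant one.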



Quote for the day:


"Thinking is the hardest work there is, which is probably the reason so few engage in it." -- Henry Ford