That's not to say that we won't have a need for high-level coders, Burton said. Many engineers are solving difficult problems that require creativity, while others are performing important research. However, a lot of software being written today is "essentially glue code," Burton said. "It's putting together pieces that already exist. That's the sort of thing that starts to get automated." Where should people turn instead to future-proof their careers? The humanities, according to Burton. "The humanities start to become very important when you start to realize that technology is going to become very, very easy to use," Burton said. "The toolsets change, but what becomes important is the creativity, and particularly the understanding of the human mind. Because as long as humans are still the consumer, they're going to matter, and they're going to be demanding humans in some areas of the process." One new area that requires human intelligence is determining where humans tolerate technology, Burton said.
"This isn't just that a node crashes or a third of the nodes on the network crash, but rather something like up to a third of the nodes on the network can be actively trying to corrupt the network, but are unable to do that," Middleton said. "This would be our goal for most deployments when you're putting some sort of business value onto the network. You want to know that it will be resilient to attack." Beyond these capabilities, Hyperledger Sawtooth also features on-chain governance, which uses smart contacts to vote on blockchain configuration settings as the allowed participants and smart contracts. Further, it has an "advanced transaction execution engine" that's capable of processing transactions in parallel to help speed up block creation and validation. But, arguably, one of Sawtooth's most intriguing benefits "is its proof of elapsed time, or PoET, consensus mechanism, which is a novel attempt to bring the resiliency of public blockchains to the enterprise realm -- without forgoing the requirements of security and scale," said Jessica Groopman, industry analyst and founding partner of Kaleido Insights.
It is important to remember that priors themselves are probability estimates. For each bit of prior knowledge, you are not putting it in a binary structure, saying it is true or not. You’re assigning it a probability of being true. Therefore, you can’t let your priors get in the way of processing new knowledge. In Bayesian terms, the weight of new evidence relative to a prior is called the likelihood ratio, or the Bayes factor. Any new information you encounter that challenges a prior simply means that the probability of that prior being true may be reduced. Eventually, some priors are replaced completely. This is an ongoing cycle of challenging and validating what you believe you know. When making uncertain decisions, it’s nearly always a mistake not to ask: What are the relevant priors? What might I already know that I can use to better understand the reality of the situation? Many of us are familiar with the bell curve, that nice, symmetrical wave that captures the relative frequency of so many things from height to exam scores. The bell curve is great because it’s easy to understand and easy to use. Its technical name is “normal distribution.” If we know we are in a bell curve situation, we can quickly identify our parameters and plan for the most likely outcomes.
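The updating cycle described above is easiest to see in odds form: posterior odds equal prior odds multiplied by the Bayes factor. A short sketch (the function name and example numbers are illustrative, not from the article):

```python
def update_posterior(prior, likelihood_ratio):
    """Bayesian update in odds form.

    likelihood_ratio (the Bayes factor) is
    P(evidence | hypothesis) / P(evidence | not hypothesis).
    Posterior odds = prior odds * likelihood_ratio.
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Evidence twice as likely under the hypothesis strengthens the prior:
stronger = update_posterior(0.5, 2.0)   # rises above 0.5
# Evidence that challenges the prior (ratio < 1) weakens it:
weaker = update_posterior(0.5, 0.5)     # falls below 0.5
```

Note that a challenged prior is not discarded, only discounted — exactly the non-binary treatment the passage argues for.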
One common mistake that administrators make is failing to define adequate audit trails to enable early detection of security threats and allow for related investigations. The main reason for this oversight is a failure to balance audit trail needs against system capacity. Some administrators argue that excessive auditing produces huge, unmanageable volumes of event logs. Deciding what to audit and what not to audit, or what may or may not be omitted, is therefore not just a configuration task, but rather a risk assessment task that should be embedded in the governance structures of the organization’s IT security frameworks. The organization’s audit needs are guided by the regulations to which it is subject, its security threat models, the information required for investigations, and its IT security policy. Identification of the possible threats that the organization faces is usually carried out as part of risk assessment. Security events derived from audit policy settings are key risk indicators that the organization should use to measure how vulnerable the system is to the identified threats.
Because of their single-purpose design, GPU cores are much smaller than cores for CPUs, so GPUs have thousands of cores whereas CPUs max out at 32. With up to 5,000 cores available for a single task, the design lends itself to massive parallel processing. ... GPU use in the data center started with homegrown apps thanks to a language Nvidia developed called CUDA. CUDA uses a C-like syntax to make calls to the GPU instead of the CPU, but instead of doing a call once, it can be done thousands of times in parallel. As GPU performance improved and the processors proved viable for non-gaming tasks, packaged applications began adding support for them. Desktop apps, like Adobe Premiere, jumped on board, but so did server-side apps, including SQL databases. The GPU is ideally suited to accelerate the processing of SQL queries because SQL performs the same operation – usually a search – on every row in the set. The GPU can parallelize this process by assigning a row of data to a single core. Brytlyt, SQream Technologies, MapD, Kinetica, PG-Strom and Blazegraph all offer GPU-accelerated analytics in their databases.
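The row-per-core idea works because a SQL predicate is evaluated independently on every row, so rows can be fanned out to as many workers as are available. A Python thread-pool sketch of that pattern (a CPU stand-in for thousands of GPU cores; the table and predicate are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def matches(row):
    # The per-row predicate a SQL WHERE clause would apply; on a GPU,
    # each core would evaluate this for one row at the same time.
    return row["country"] == "DE"

def parallel_scan(rows, workers=4):
    """Stand-in for a GPU-style table scan: the predicate is independent
    per row, so rows fan out across workers (a handful of threads here,
    thousands of cores on a GPU)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        keep = list(pool.map(matches, rows))
    return [row for row, k in zip(rows, keep) if k]

rows = [{"id": 1, "country": "DE"},
        {"id": 2, "country": "US"},
        {"id": 3, "country": "DE"}]
```

The GPU versions of this scan win not because each core is fast but because the per-row work is embarrassingly parallel, so thousands of rows are filtered in one pass.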
With recent data breaches and the associated flood of PII onto the dark web, synthetic identity fraud is easier to commit than ever. Credit card losses due to this fraud exceeded $800 million in the U.S. last year, says Julie Conroy, a research director at Aite Group. Perhaps more shocking is just how much of the fraud is going undetected, flying under the radar as credit write-offs. "One of the challenging aspects of this is often it doesn't get recognized as fraud and gets written off as a credit loss; so understanding the scope of the problem has been a challenge," Conroy says in an interview with Information Security Media Group about Aite's latest research. "A number of institutions are starting to see fundamental shifts to things like their credit delinquency curves that are only explainable by synthetic identity fraud." Mitigating the risk of synthetic identity fraud is challenging, given that it's designed to look like a real person establishing a credit history. But Conroy suggests that a layered approach can be valuable.
The tool is designed for international use, with the user able to select local currency units and the order of magnitude (thousands, millions, billions, etc.) relevant to the analysis. Embedded graphs are controlled through intuitive settings, letting analysts and management inspect the relevant results to a lesser or greater level of granularity as required. The tool further informs management by comparing and presenting statistical results such as the average annual loss exposure and user-defined percentile thresholds of loss and chance of exceedance of annual loss. The tool is genuinely versatile, making it equally suitable for the university professor or corporate trainer teaching quantitative risk analysis, as well as experienced corporate risk analysts who need an easy-to-use yet accurate risk evaluator for individual risk questions. In addition, to further support both the tool and the Open FAIR standards, The Open Group has also recently published a Risk Analysis Process Guide which offers some best practices for performing Open FAIR risk analysis, aiming to help risk analysts understand how to apply the Open FAIR risk analysis methodology.
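Statistics like average annual loss exposure, percentile loss thresholds, and the chance of exceeding a given annual loss typically come from a Monte Carlo simulation over event frequency and per-event magnitude. A rough Python sketch of how such figures can be produced — this is illustrative only, not The Open Group's actual tool or the Open FAIR calculation, and all parameters are invented:

```python
import random
import statistics

def simulate_annual_losses(freq_mean, loss_low, loss_high,
                           years=10_000, seed=1):
    """Simulate total loss per year: event count is drawn from a crude
    binomial approximation with the given mean, and each event's loss
    is uniform between a low and high estimate."""
    rng = random.Random(seed)
    totals = []
    for _ in range(years):
        # Binomial(n=4*mean, p=0.25) has the requested mean event count.
        events = sum(rng.random() < 0.25 for _ in range(int(freq_mean * 4)))
        totals.append(sum(rng.uniform(loss_low, loss_high)
                          for _ in range(events)))
    return totals

totals = simulate_annual_losses(2.0, 10_000, 50_000)
average_annual_loss = statistics.mean(totals)
p95 = sorted(totals)[int(0.95 * len(totals))]                 # 95th percentile loss
exceedance = sum(t > 100_000 for t in totals) / len(totals)   # chance annual loss tops 100k
```

A real Open FAIR analysis uses calibrated ranges (e.g. PERT distributions) rather than uniform draws, but the reported outputs — mean exposure, percentiles, exceedance probabilities — are summaries of a simulated loss distribution like this one.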
If the cybersecurity market is a globe, with each market segment taking its piece - one continent for endpoint security, an archipelago for threat intelligence - where would identity and access management fit? "Identity is its own solar system," says Robert Herjavec, CEO of global IT security firm Herjavec Group, and Shark Tank investor. "Its own galaxy." "The problem with users is that they’re interactive," he explains. The reason identity management is such a challenge for enterprises is that users get hired, get fired, get promotions, access sensitive filesystems, share classified data, send emails with potentially classified information, try to access data they don't have access to, and try to do things they aren't supposed to do. Set-and-forget doesn't work on people. Luckily, great IAM is getting easier to come by. Herjavec points to identity governance tools like Sailpoint and Saviynt and privileged access management tools like CyberArk, saying that now "not only are they manageable, they’re fundamentally consumable from a price point."
Technologies such as UAVs and orbital satellites are becoming necessary for successfully utilizing fields, analyzing crops and providing proper interventions. Today’s technology allows data about extremely specific field observations to be delivered straight to a tablet or computer. From thousands of miles away, landowners can have satellites monitoring their fields and sending instant information on crop health to anyone anywhere in the world. These innovative technologies give farmers the ability to generate pertinent information about the health of their crops and their yield, identify problems and make important and well-educated decisions. Having all of this sensor technology is just one step in providing food on the table for a constant and ever-growing population. An even bigger step is the implementation of these technologies globally, in both developed and underdeveloped countries. According to FAO statistics, the nations with the highest population growth rates are also the poorest nations, creating an even greater need for technology-based interventions.
By accounting for disaster scenarios in your IT service management processes, you can integrate disaster recovery thinking into normal IT operations. This will reduce the possibility of extended system outages in a disaster, as well as provide you with a complete action plan for any incident, large or small. What happens if the disaster is less obvious? Do you know when to escalate those seemingly less harmful incidents and begin to initiate recovery procedures? By integrating your Disaster Recovery Plan into your overall IT service management processes, it becomes much clearer when it’s necessary to invoke disaster recovery procedures, rather than continuing to try to troubleshoot your way out of the situation. Knowledge is power, so the more you know about your systems and what to do in case of failures of any size, the less likely you are to experience a long service interruption. One of the best ways you can start the integration between your Disaster Recovery Plan and your IT service management is by performing a Business Impact Analysis on all your IT systems.
Quote for the day:
"Thinking is the hardest work there is, which is probably the reason so few engage in it." -- Henry Ford