The MIT researchers, led by Kalyan Veeramachaneni, proposed a concept they call the Synthetic Data Vault (SDV): a machine learning system that creates artificial data from an original data set. The goal is to be able to use the data to test algorithms and analytical models without any link to the real data of the organisation involved. As Veeramachaneni succinctly puts it, “In a way, we are using machine learning to enable machine learning.” The SDV achieves this using a machine learning algorithm called “recursive conditional parameter aggregation”, which exploits the hierarchical organisation of the data and captures the correlations between multiple fields to produce a multivariate model of the data. The system learns the model and subsequently produces an entire database of synthetic data. To test the SDV, the team generated synthetic data for five different public datasets.
Companies will in all likelihood build an infrastructure that uses more than one platform. The company-specific business technology platform can easily become a platform of technology platforms. If these platforms, and the applications on top of them, do not play nicely together, the company will have a hard time providing customers with positive engagements that leave a good experience. Instead, there will be inconsistent and duplicated data and broken processes, hence frustrated employees and customers; in short, poor experiences for everybody involved. And not concentrating on good experiences leaves companies with missed opportunities, as studies like this one show. ... If there already are platform-based business applications available, then it is a good option to look at what the vendor's underlying AI platform offers, always keeping in mind the answer to the question “What experiences do you want to deliver?”
The challenge for organizations is pulling all the critical data together into a coherent view of the business that allows them to confidently act on well-founded plans. Increasingly, CFOs are being tasked not only with understanding and communicating financial results but also with helping the organization understand the operational drivers behind them, requiring a more detailed analysis of business KPIs, many of which are non-financial. In fact, a recent report (Adaptive Insights CFO Indicator Q3 2016) found that 76 percent of CFOs are tracking non-financial KPIs, which involves greater collaboration across the organization and data integration to create a holistic view of the business. While it is positive that so many CFOs are taking this step to become more strategic advisors, this influx of new data can pose a problem when it comes to consolidation.
Reassuringly, many innovators already have the tools to do so: Companies routinely re-engineer competitors’ products, analyze the patent landscape, and conduct interviews with technology experts to guide their operational decisions. We believe innovators can and should also use these same tools and information to guide their strategy by methodically measuring the evolution of product complexity in their space. This requires developing a taxonomy of components by sampling competitors’ products and dissecting not only physical components but also intangible ones like process innovations or business model choices. While we are not aware of any company that is yet explicitly doing this, we do see that many startups implicitly follow this logic by shifting from an impatient minimum viable product (MVP) logic to more patient innovation centered on more complex designs once cash flow and funding have been secured and the space begins to mature.
Independent cybersecurity expert Dr Jessica Barker said: “With so many data breaches hitting the headlines, there can be a sense of defeatism among some organisations. Breaches are seen as inevitable, so some organisations question the value of spending on security when it won’t make them 100% secure. However, this research has found that investing in security helps protect the organisation even when the worst happens, as companies with a strong security posture experience much quicker stock price recovery than those with a poor security posture following a data breach.” “In this past year alone we’ve seen high-profile data breaches, such as Yahoo and TalkTalk, experience the significant consequences that a breach can have on shareholder value and brand reputation,” said Bill Mann, senior vice president of products and chief product officer, Centrify.
RSA and Elliptic Curve Cryptography are two classes of algorithms that achieve these characteristics. The keys associated with these algorithms can be serialized and encoded in special files called X.509 certificates. Certificates contain a ton of other information like names, network addresses, and dates. When people refer to certificates, they usually are referring to X.509 certificates that contain public keys. Data (like certificates) can be digitally signed. A digital signature is, in essence, a hash of the data encrypted with the signer’s private key. A good hash function is one that’s hard to reverse and not prone to collisions. If you apply a good hash function to a message, the result of the function does not reveal anything about the original message. Further, a good hash function is unlikely to produce the same resultant value for any two different messages (such a coincidence is called a collision).
Should we just dismiss RDF as impractical and RDF stores as inferior software delivered by academics and move on? Maybe not so fast. There are RDF vendors who are entirely professional about what they do, and RDF does have certain things to offer that are not there in other graph data models. We had a discussion with Vassil Momtchev, GraphDB product owner at Ontotext, about the benefits and use cases of RDF. Ontotext's legacy is in text mining algorithms, but these days it is mostly known for GraphDB, its RDF graph database engine. "Our text mining is backed by a complex semantic network to represent background and extracted knowledge. Back in 2006, we found that none of the existing RDF databases were able to match our requirements for a highly scalable database. This was how GraphDB started," says Momtchev.
The ethereum dream. Just by some code that no one can stop, we’ll bring money back to the people. Not because we dislike banks or governments, but because they too will see it is better for money, the free market's own regulator, to itself be regulated by the free market. That statement isn’t a hypothesis, but the insight of Hayek, a Nobel prize winner who spent his life studying money. So the foundations of this space are built on strong and mainstream grounds, not fringe thoughts as many were led to believe in previous years. Its walls are futuristic in architecture. We are digitizing money, making it dynamic, turning it into code, while giving it a very primitive level of intelligence, in that we can tell it to do things and it does so. Only imagination can constrain the applications, and the benefits, for poor or rich, for banks and governments or the people, will probably be very considerable. To the point where we might actually get those flying cars in our own lifetime.
The collision between the IoT and blockchain worlds portends some important payments industry developments around the efficient tracking of device payment history, all supported by a ledger of secure data exchanges among devices, web systems and users. Further, this technological convergence also shows promise in terms of the use of smart devices that are programmed to conduct a variety of transactions such as the automatic issuance of invoices and payments. Dan Loomis, vice president and director of mobile product management at the business and financial software firm Intuit, is firmly entrenched in this evolving IoT/blockchain conversation through his work in creating payment experiences for businesses that operate on a global scale, and brought this expertise to the TRANSACT panel discussion.
The finalize() mechanism is an attempt to provide automatic resource management, in a similar way to the RAII (Resource Acquisition Is Initialisation) pattern from C++ and similar languages. In that pattern, a destructor method (the rough analogue of Java’s finalize()) is provided to enable automatic cleanup and release of resources when the object is destroyed. The basic use case for this is fairly simple: when an object is created, it takes ownership of some resource, and the object’s ownership of that resource persists for the lifetime of the object. Then, when the object dies, the ownership of the resource is automatically relinquished. Let’s look at a quick, simple C++ example that shows how to put an RAII wrapper around C-style file I/O. The core of this technique is that the object destructor method (denoted with a ~ at the start of a method named the same as the class) is used for cleanup:
While free tools sound great, their usefulness varies from business to business. For some organizations, they are helpful means of solving small problems. For others, they are too "siloed" to be effective. "It depends on the environment," says Travis Farral, director of security strategy at Anomali, which is behind the Staxx free threat intelligence tool. "Some are against major deployment of anything open-source that doesn’t have a company behind it, for support or liability issues." Because many free and low-cost tools are designed for specific purposes, they often require advanced technical expertise. Several major businesses use a combination of major enterprise tools and FOSS utilities because they have the staff to support them. For organizations with fewer staff, siloed tools require security practitioners to become systems integrators, because they need to make the solutions work together, says Lee Weiner.
Quote for the day:
"Power concedes nothing without a demand. It never did and it never will." -- Frederick Douglass