At least the banks, as far as I can tell, didn't say they weren't tracking people; they simply said nothing either way. But bank app developers need to remember that banks are in a much more precarious position than Google: they need to at least appear trustworthy in a far more public fashion. Why? Google is still the most effective and comprehensive search engine on the planet. I'd love to be able to say that DuckDuckGo or other privacy-oriented engines are as good or better, but based on daily testing, Google still comes out far ahead. Bing, Yahoo, and others long ago lost the search battle to Google. That means an annoyed Google user can't leave Google without losing some serious search functionality, and on an Android phone the reliance is even deeper and more tightly integrated. But banks? Not even close. Disgruntled customers can easily take their money and data to the rival bank across the street, and they will likely suffer no disruption or degradation of service.
If you lose your Android phone or decide to move to another, there's a decent chance your existing text messages will vanish into the digital ether. That might be fine (and hey, who knows, maybe even a positive thing), but if you do want to back up and save your SMS data, it's pretty painless to do. The simplest way is to use a messaging app that does all the heavy lifting for you. If you have one of Google's Pixel phones, Google's own free Android Messages app will automatically back up some of your messages — up to 25MB worth, according to Google, and only SMS texts (not MMS media messages). It's preinstalled as the default messaging app on your device, so you don't have to do anything to get it up and running. If you're using a phone other than a Pixel — or if you're using a Pixel and want something a bit more robust — the third-party Pulse SMS app is an excellent next-level option. In addition to providing its own universally available automatic cloud backup and sync system, it offers plenty of opportunities for customization.
First, there are risks in changing any aspect of IT, as we saw when moving to the PC, LANs, client/server, mobile, and the web — all things that made us rethink IT yet again, and that drove change which in turn drove risk. Second, if businesses took no risks, nothing would change — and they would die. So the cost of a risk should always be offset by the value gained in taking it. In the case of cloud computing, better operational efficiency leads to lower operational costs, and cloud computing also improves business agility, letting a business react faster to market changes and expand quickly as it grows. These are all game-changers and value drivers for cloud computing. Third, risk can be reduced with planning. That means taking the time to figure out what your issues are, how a technology such as cloud computing can address those issues (if it can), and how to reduce the risks in doing so. Security, for example, is always a risk. But addressed with the right approaches and technologies, your cloud-based system can actually be more secure than your "as is" on-premises systems.
The usual way that dedupe works is that the data to be deduped is chopped up into what most products call chunks. A chunk is one or more contiguous blocks of data. Where and how the chunks are divided is the subject of many patents, but suffice it to say that each product creates a series of chunks that will then be compared against all previous chunks seen by a given dedupe system. The comparison works by running each chunk through a deterministic cryptographic hashing algorithm, such as SHA-1 or a member of the SHA-2 family (e.g., SHA-256), which produces what is called a hash. For example, if you enter "The quick brown fox jumps over the lazy dog" into a SHA-1 hash calculator, you get the hash value 2FD4E1C67A2D28FCED849EE1BB76E7391B93EB12. If the hashes of two chunks match, the chunks are considered identical, because even the smallest change to a chunk changes its hash. A SHA-1 hash is only 160 bits, so if you store a 160-bit hash in place of an 8 MB chunk, you save almost 8 MB every time you back up that same chunk. This is why dedupe is such a space saver.
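The hash-and-compare loop described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's implementation — real products use patented variable-size chunking and often stronger hashes than SHA-1:

```python
import hashlib

def dedupe(chunks):
    """Store each unique chunk once, keyed by its SHA-1 hash.

    Returns (store, recipe): `store` maps hash -> chunk data, and
    `recipe` is the ordered list of hashes needed to rebuild the stream.
    """
    store = {}
    recipe = []
    for chunk in chunks:
        h = hashlib.sha1(chunk).hexdigest()
        if h not in store:   # only previously unseen chunks consume space
            store[h] = chunk
        recipe.append(h)
    return store, recipe

# The test vector from the text:
print(hashlib.sha1(b"The quick brown fox jumps over the lazy dog")
      .hexdigest().upper())
# 2FD4E1C67A2D28FCED849EE1BB76E7391B93EB12

# Backing up the same chunk repeatedly stores its data only once:
store, recipe = dedupe([b"chunk-A", b"chunk-B", b"chunk-A"])
print(len(store), len(recipe))  # 2 unique chunks, 3 references
```

The space saving falls out of the `recipe` list: the third entry is just another 160-bit hash pointing at data already in the store.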
The topic itself is broad and expansive, and the true impact of this segment of computing will be felt for generations to come. For a strong perspective on where the industry stands today, ISACA's State of Cybersecurity 2018 research is a must-read. The report provides a solid assessment of what needs to happen for the cybersecurity field to move from reactive to proactive. Challenges around cybersecurity are not new; they have been around since the dawn of computing. Now, however, it is a topic that everyone talks about: a board topic, a public safety and livelihood topic, and a personal topic. Hitting this trifecta of impact has finally created the sense of urgency and attention that is needed. The key is that as an industry, as a country, and as a world of over 7 billion people, we effectively address these challenges to preserve the computing environment for the future.
“CIOs and technology leaders should always be scanning the market along with assessing and piloting emerging technologies to identify new business opportunities with high-impact potential and strategic relevance for their business.” Walker said CIOs and business decision makers can use predictions such as the emerging trends in Gartner's Hype Cycle as a reality check, helping them prioritise which areas are likely to become established in the near future. “Some of these capabilities are being delivered in a rapid fashion,” said Walker. Gartner’s predictions show that some technologies, particularly in the AI space — such as deep learning, virtual assistants, and custom silicon for AI — are likely to become mainstream within two to five years, which does not give CIOs much time to get ready. As an example, Walker said the hospitality sector is being disrupted: the Marriott hotel chain is building service bots to deliver room service.
Behind the scenes, a data classification model should include metadata that sticks with the newly created document throughout its life. This requires an organization to permanently link the document with immutable metadata -- which is where information management systems, such as those from M-Files and FileHold, come into play. By having users choose a template with its associated metadata, data can then be encrypted as required before it hits any storage media, whether that is a local device, an on-site system or a cloud platform. Anything that isn't open/public -- sticking with the example above -- will then be encrypted or protected in transit using virtual private networks (VPNs). This approach can also help a business determine whether data should primarily be held on premises or in the cloud. Once the basic metadata is created in an immutable manner, users can add extra metadata for further classification.
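As a rough sketch of the idea -- with hypothetical classification labels and field names, not the actual schema of M-Files or FileHold -- the decision to encrypt can be driven entirely by the immutable metadata attached at creation time:

```python
from dataclasses import dataclass

@dataclass(frozen=True)      # frozen: the core metadata is immutable once set
class DocMetadata:
    classification: str      # e.g. "public", "internal", "confidential" (illustrative labels)
    owner: str
    extra_tags: tuple = ()   # users can layer on extra classification later

def must_encrypt(meta: DocMetadata) -> bool:
    """Anything that isn't open/public gets encrypted before it hits storage."""
    return meta.classification != "public"

doc = DocMetadata(classification="confidential", owner="finance")
print(must_encrypt(doc))  # True
```

Because the dataclass is frozen, any attempt to reassign `classification` after creation raises an error, which is the behavior the "immutable metadata" requirement calls for.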
The bandage is the culmination of over six years' work between Tufts and other higher education institutions to create a bandage that includes sensors to monitor a number of markers that show how well, or otherwise, a wound is healing, alongside a drug delivery mechanism -- all in a form factor that's flexible enough to be wrapped around a wound. "Chronic wounds are a very biologically complex system, and you have to have the bandage interface in very close contact with the wound so you can monitor whether the wound is healing. At the same time, we wanted to find out if there was a way to intervene at the right time to accelerate wound healing," Sameer Sonkusale, professor of electrical and computer engineering at Tufts University's School of Engineering, told ZDNet. The bandage is a combination of a cloth layer and an electronics layer. The electronics layer includes sensors that track the pH and temperature of the wound -- a higher than normal pH or temperature indicates it's not healing well.
Signal noise and distortion have always set the limits of traditional (and fairly inefficient) fiber transmission; they're the main reason the technology's transmission distance and capacity are restricted. Experts believe, however, that if the noise in the amplifiers used to extend distance could be cleaned up, and the signal distortion inherent in the fiber itself eliminated, fiber could become more efficient and less costly to deploy. And if fiber could carry more traffic in single strands, it would be cheaper to power and could keep up with rapidly escalating internet growth. Those two areas of improvement are where many scientists are concentrating their fiber development efforts. The researchers at Chalmers University of Technology and Tallinn University of Technology said they can now send data 4,000 kilometers (nearly 2,500 miles) — roughly the air-travel distance from Los Angeles to New York.
What makes an algorithm "fair"? Let's say I have a lot more data besides income -- things like credit score, job history, and so on -- and a large dataset of past outcomes to train an algorithm for future use. Aiming for accuracy alone will almost certainly result in different treatment of people along age, race, and gender lines. To be fair, should I aim to approve the same percentage of people from each class, even if that means taking some risks? Alternatively, I could train my algorithm to equalize, across classes, the percentage of people who actually paid back their loan that get approved (the true-positive rate, which we can estimate from historical data). Here's the catch: if I do either of these things, I have to hold the different groups to different standards. Specifically, I would have to issue a loan to someone of one class but deny someone of a different class with the exact same credentials, leading to yet another unfair scenario.
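The tension between the two criteria can be seen in a toy example (all numbers invented for illustration): one and the same decision rule can equalize approval rates across two groups while leaving their true-positive rates unequal.

```python
def approval_rate(decisions):
    """Fraction of applicants approved (demographic parity criterion)."""
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, repaid):
    """Fraction of applicants who would repay that we actually approve."""
    approved_and_repaid = sum(d and r for d, r in zip(decisions, repaid))
    return approved_and_repaid / sum(repaid)

# Hypothetical outcomes: group A has 3 of 4 repayers, group B has 2 of 4.
repaid_a = [1, 1, 1, 0]
repaid_b = [1, 1, 0, 0]

# Approve the top two applicants in each group (same rule for both):
decisions_a = [1, 1, 0, 0]
decisions_b = [1, 1, 0, 0]

print(approval_rate(decisions_a), approval_rate(decisions_b))  # 0.5 0.5  -> equal approval
print(true_positive_rate(decisions_a, repaid_a))               # 0.666... -> one repayer denied
print(true_positive_rate(decisions_b, repaid_b))               # 1.0      -> every repayer approved
```

Equalizing the approval rates here leaves group A's true-positive rate at 2/3 versus 1.0 for group B; forcing the TPRs to match instead would require approving a third applicant from group A, breaking the equal approval rates. That trade-off is the catch described above.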
Quote for the day:
"Making those around you feel invisible is the opposite of leadership." -- Margaret Heffernan