The most obvious IoT application for banks is in payments. In a commonly floated scenario, a customer's refrigerator senses the household has run out of milk and orders a fresh carton from the local grocery store. The payment seamlessly takes place in the background. A good experience would incent the customer to use a bank app for this rather than a built-in payment system. Loyalty programs could flow through such an app, and the bank could collect data that could be used in marketing and customer service. ... "Cars are interesting to think about for any number of reasons, not the least of which is because many people spend an inordinate amount of time in their cars," said Dominic Venturo, chief innovation officer at U.S. Bank in Minneapolis.
Facebook says its mission is to make the world more "open and connected," which is fine, but it doesn't quite grab you in the gut like unhooking planet Earth from its oil addiction and colonizing the Red Planet. Zuckerberg's stated plans for the philanthropy so far are ambitious because of the sums involved, but not terribly thrilling. He and Chan, a physician, want to cure diseases, improve education, and open up the world to the Internet. The structure of their giving will turbocharge these efforts by bringing entrepreneurship into the picture in a bigger way.
The center of data gravity is shifting as more apps are delivered via cloud Software as a Service (SaaS). In the past, I might only have had to combine a Salesforce extract with other on-premises app data in a client’s on-premises data warehouse. Today there is a constantly growing list of popular cloud app data sources that analytics pros need to include in decision-making processes. If you neglect the ocean of cloud and IoT data sources that your competitors already include in their analytics, you will lose your competitive edge and may miss a key window of opportunity in the hyper-competitive global economy. Don’t believe me? ... Furthermore, there is a priceless peace of mind that comes with knowing someone else is on the hook along with me to make sure everything works. I guess you could say that I have finally seen the cloud light.
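The extract-and-combine pattern described above can be sketched in a few lines. The fetch results, field names, and join key below are invented placeholders for illustration, not a Salesforce or warehouse API:

```python
# Minimal sketch: merging a cloud SaaS extract with on-premises records
# before loading a warehouse. All names and fields are hypothetical.

def merge_sources(saas_rows, onprem_rows, key="customer_id"):
    """Join rows from two sources on a shared key, preferring SaaS fields."""
    merged = {row[key]: dict(row) for row in onprem_rows}
    for row in saas_rows:
        merged.setdefault(row[key], {}).update(row)
    return list(merged.values())

# Illustrative inputs: one record from a SaaS app, two from an on-prem system.
saas = [{"customer_id": 1, "plan": "pro"}]
onprem = [{"customer_id": 1, "region": "EMEA"},
          {"customer_id": 2, "region": "APAC"}]
rows = merge_sources(saas, onprem)
```

In practice the SaaS side would come from a vendor API or a managed connector, but the shape of the problem, reconciling cloud and on-premises records on a shared key, stays the same.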
The key differentiating point – and the whole premise behind hyper-convergence – is that this model doesn’t actually rely on the underlying hardware. Not entirely, at least. This approach truly converges all aspects of data processing at a single compute layer, dramatically simplifying storage and networking through software-defined approaches. The same compute system now doubles as a distributed storage system, removing chunks of complexity from storage provisioning and bringing storage technology in tune with server technology refreshes. Here’s the big piece to remember: since the key aspect of hyper-convergence is software performing the storage controller functionality, it’s completely hardware-agnostic.
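The "storage controller as software" idea can be sketched abstractly: blocks are placed and replicated across generic nodes purely in the compute layer. The node names, placement rule, and replica count below are illustrative assumptions, not any vendor's design:

```python
# Minimal sketch of a software-defined storage layer: replication and
# placement happen in software across commodity nodes, so no particular
# storage hardware is assumed. Details are invented for illustration.

class SoftwareStorageLayer:
    def __init__(self, nodes, replicas=2):
        self.nodes = nodes                    # any commodity servers
        self.replicas = replicas
        self.store = {n: {} for n in nodes}   # per-node block maps

    def write(self, block_id, data):
        # Place each block on `replicas` consecutive nodes chosen by a
        # simple hash; the controller logic is entirely software.
        start = hash(block_id) % len(self.nodes)
        for i in range(self.replicas):
            node = self.nodes[(start + i) % len(self.nodes)]
            self.store[node][block_id] = data

    def read(self, block_id):
        for node in self.nodes:
            if block_id in self.store[node]:
                return self.store[node][block_id]
        raise KeyError(block_id)

cluster = SoftwareStorageLayer(["node-a", "node-b", "node-c"])
cluster.write("block-1", b"payload")
```

Because placement and redundancy live in this layer rather than in a hardware RAID controller, the nodes underneath can be refreshed or mixed freely.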
Hyper-converged infrastructure offers strong modern backup methods, including data deduplication, intelligent load balancing, data compression, synthetic backup and rapid snapshots. For example, deduplication is a form of data compression in which a single master copy of the data is kept and subsequent duplicates are replaced with references to it. Its value shows in virtual desktop infrastructure (VDI) environments where thousands of users access the same applications. Another example is self-healing: in the event that a block storage volume under intensive read/write load fails, the system can automatically rebalance by moving recovery workloads to nodes not involved in the failure.
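The master-copy-plus-references idea behind deduplication can be shown in a few lines. The chunk content and the 1,000-writer loop below are invented to mimic the VDI scenario, where many users store the same application data:

```python
# Minimal sketch of content-addressed deduplication: identical chunks are
# stored once, and repeat writes only bump a reference count.
import hashlib

class DedupStore:
    def __init__(self):
        self.chunks = {}   # digest -> chunk bytes (single master copy)
        self.refs = {}     # digest -> reference count

    def write(self, chunk: bytes) -> str:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in self.chunks:
            self.chunks[digest] = chunk            # store the first copy only
        self.refs[digest] = self.refs.get(digest, 0) + 1
        return digest                              # callers keep the reference

    def physical_size(self) -> int:
        return sum(len(c) for c in self.chunks.values())

store = DedupStore()
for _ in range(1000):   # e.g. 1,000 VDI users writing the same app image chunk
    store.write(b"identical-app-binary-chunk")
```

A thousand logical writes end up occupying the physical space of one chunk, which is why dedup ratios in homogeneous VDI environments can be dramatic.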
When it comes time to implement new technologies or IT-enabled processes, most companies tend to bring all relevant stakeholders to the table. They assemble leaders in the EA department and members of the finance department and the strategy team, as well as the software-development group, to vet options and come to a decision about which changes to make and how. This approach is useful for ensuring that all perspectives are heard and that all system requirements are accounted for. But when disagreements occur, those around the table will tend to deflect blame onto the EA department and absolve themselves of responsibility—particularly when multimillion-dollar IT infrastructure updates and system replacements are at stake.
A first and important step to promoting consistency is having a long-term vision for the enterprise portfolio. Being able to describe both the current-state and future-state architectures is essential to bringing projects in line. Start by assessing the current portfolio. Map out what systems exist and what they do. This does not need to be deeply detailed or call out individual servers. Instead, focus on applications and products and how they relate. Multiple layers may be required. If the enterprise is big enough, break the problem down into functional areas and map them out individually. If there is an underlying architectural pattern or strategy, identify it, and note where it has and has not been followed.
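A portfolio map at the suggested level of detail is small enough to capture as plain data. The application names, functional areas, and "expected pattern" below are invented placeholders; the point is the shape of the map, not its contents:

```python
# Minimal sketch: a current-state portfolio at the application level, with
# relationships and the architectural pattern each app follows. All entries
# are hypothetical examples.

portfolio = {
    "billing": {"area": "finance", "talks_to": ["crm", "ledger"], "pattern": "event-driven"},
    "crm":     {"area": "sales",   "talks_to": ["billing"],       "pattern": "event-driven"},
    "ledger":  {"area": "finance", "talks_to": [],                "pattern": "batch"},
}

def by_area(portfolio):
    """Group applications into functional areas for separate mapping."""
    areas = {}
    for app, info in portfolio.items():
        areas.setdefault(info["area"], []).append(app)
    return areas

def deviations(portfolio, expected="event-driven"):
    """Flag applications where the stated architectural pattern is not followed."""
    return [app for app, info in portfolio.items() if info["pattern"] != expected]
```

Even a crude machine-readable map like this makes the two assessment questions in the text answerable directly: what lives in each functional area, and where the strategy has not been followed.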
Cyber governance of a company is now key. For an FI, this means ensuring not only that its own cyber policies and systems are in place, but also those of its service providers. Service-provider due diligence is therefore a key requirement. Many FIs depend on third-party service providers for services such as administrative, trading, custodial, data storage (including cloud solutions), human resources and technology services. Depending on the level of risk, due diligence can range from contract reviews and questionnaires covering IT security, staff training, business continuity plans, cyber-breach incident response plans, cyber-security audits, and the existence and extent of cyber insurance, all the way to onsite visits.
The disk space intended for a data file in a database is logically divided into pages numbered contiguously from 0 to n. In SQL Server, the page size is 8 KB, which means SQL Server databases have 128 pages per megabyte. Disk I/O operations are performed at the page level; that is, SQL Server reads or writes whole data pages. The more compact the data types used, the fewer pages are required to store that data, and consequently fewer I/O operations are needed. SQL Server's buffer pool significantly improves I/O throughput; its primary purpose is to reduce database file I/O and improve the response time for data retrieval.
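The page arithmetic above can be worked through as a back-of-the-envelope sketch. It assumes the 8 KB page size stated in the text and roughly 8,060 usable bytes per page for row data; the two row widths are purely illustrative:

```python
# Sketch of why compact data types reduce page counts and thus I/O,
# under the stated 8 KB page size. Row widths are invented examples.

PAGE_SIZE = 8 * 1024                       # 8 KB per page
PAGES_PER_MB = (1024 * 1024) // PAGE_SIZE  # 128 pages per megabyte

def pages_needed(row_count, row_bytes, usable_per_page=8060):
    """Pages required when fixed-width rows pack into one page's usable bytes."""
    rows_per_page = usable_per_page // row_bytes
    return -(-row_count // rows_per_page)  # ceiling division

wide = pages_needed(1_000_000, 400)    # e.g. rows using loose, oversized types
narrow = pages_needed(1_000_000, 100)  # the same rows with compact types
```

With four times as many rows per page, the compact layout needs a quarter of the pages, and therefore roughly a quarter of the page reads for a full scan of the same data.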
In today’s fast-paced digital economy, it is understood that effective data management strategies can have a significant impact. CA’s global study of senior IT and business executives on the role of software as a business enabler found that “digital transformation” is underway as a coordinated strategy among more than half of the survey participants. The top 14% of respondents were identified as “digital disruptors” and, according to the survey, have two times higher revenue growth and two-and-a-half times higher profit growth than mainstream enterprises.
Quote for the day:
“Before you make a decision, ask yourself this question: will it result in regret or joy in the future?” -- Rob Liano