November 17, 2016
.NET, Azure, Azure IoT Suite, Cloud Services, Cloud to Device, Connectivity, Device Shadow, Device to Cloud, Device Twin, Internet Appliance, Internet of Things, IoT, IoT Hub, machine-to-machine (M2M), Microsoft, Tech-Trends
Today Microsoft announced the general availability of Azure IoT Hub Device Management. With this release, Azure IoT Hub subscribers will get access to the following features and functionality:
- Device twin. Use a digital representation of your physical devices to synchronize device conditions and operator configuration between the cloud and device.
- Direct methods. Apply a direct, performant action on a connected device through the cloud.
- Jobs. Broadcast and schedule device twin changes and methods to scale management operations across millions of devices.
- Queries. Create real-time, dynamic reports across device twins and jobs to attest status and health for entire device collections, whether your devices are online or offline.
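To make the device twin idea concrete, here is a minimal sketch (the property names such as "telemetryInterval" are illustrative, not any real device's schema) of what a twin document looks like and how the cloud side can detect settings that have not yet been applied on the device:

```python
def out_of_sync(twin):
    """Return desired properties whose reported value doesn't match yet."""
    desired = twin["properties"]["desired"]
    reported = twin["properties"]["reported"]
    return {k: v for k, v in desired.items()
            if not k.startswith("$") and reported.get(k) != v}

# Illustrative twin document: tags plus desired/reported property sections.
twin = {
    "deviceId": "thermostat-01",
    "tags": {"building": "43", "floor": "2"},
    "properties": {
        "desired":  {"telemetryInterval": 30, "$version": 4},
        "reported": {"telemetryInterval": 60, "$version": 7},
    },
}

print(out_of_sync(twin))  # {'telemetryInterval': 30}
```

The device would apply the desired value, update its reported properties, and the diff becomes empty.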
Event Hubs is an Azure service intended to help with the challenge of handling event-based messaging at huge scale. To be specific, it is a highly scalable data-streaming platform.
The idea is that if you have apps or devices publishing telemetry events, Event Hubs can be the ingestion point: you send/push messages to an Event Hub. Under the hood, Event Hubs creates a stream of all of these events, which can be read at any time in different ways. Events can be processed directly or via stream processing and pushed to real-time analytics, or the processed messages can be stored in cold storage for historical analytics on your data.
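The ingestion path described above relies on partitioned streams. As an illustrative sketch (the actual hash Event Hubs uses internally is service-defined, and the device names here are hypothetical), a stable hash of a partition key routes all events from one device to the same ordered partition:

```python
import hashlib

def assign_partition(partition_key, partition_count=4):
    """Map a publisher-supplied partition key to one of the hub's
    partitions with a stable hash, so events carrying the same key
    always land in the same ordered stream."""
    digest = hashlib.sha256(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % partition_count

# Hypothetical telemetry events: (device id used as partition key, reading).
events = [("device-1", 21.5), ("device-2", 19.0), ("device-1", 22.1)]
for device_id, reading in events:
    print(f"{device_id} -> partition {assign_partition(device_id)}")
```

Because the mapping is deterministic, per-device ordering is preserved while the overall load spreads across partitions.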
- Event Hubs can ingest and process messages at large scale, such as millions of messages per second.
- Provides publish/subscribe communication capabilities.
- Supports the AMQP and HTTP protocols.
- Uses SAS token-based authentication to identify and authenticate event publishers.
- Offers scalable throughput units, purchased as needed.
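The SAS token mentioned above is an HMAC-SHA256 signature over the URL-encoded resource URI and an expiry timestamp, rendered in the documented `SharedAccessSignature` format. A sketch using only the Python standard library (the namespace, policy name, and key below are placeholders):

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, key_name, key, ttl_seconds=3600):
    """Build an Event Hubs-style SAS token: HMAC-SHA256 over the
    URL-encoded resource URI and an expiry time, signed with the
    shared access policy key."""
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    string_to_sign = (encoded_uri + "\n" + expiry).encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), string_to_sign, hashlib.sha256).digest()
    )
    return (
        "SharedAccessSignature sr=" + encoded_uri
        + "&sig=" + urllib.parse.quote_plus(signature)
        + "&se=" + expiry
        + "&skn=" + key_name
    )

# Placeholder namespace/policy/key, for illustration only.
token = generate_sas_token(
    "https://mynamespace.servicebus.windows.net/myeventhub",
    "send-policy", "fakeBase64Key==")
print(token.split("&")[0])
```

The service recomputes the same HMAC with its copy of the key; if the signature matches and the expiry has not passed, the publisher is authenticated.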
To read more about Event Hubs, see the official Azure documentation.
October 1, 2016
Architecture, Azure, Cloud Computing, Cloud Services, Horizontal Scaling, Performance, Reliability, Resiliency, Scalability, Scale Down, Scale In, Scale Out, Scale Up, Software/System Design, Vertical Scaling, Virtualization
When you work with cloud computing, or with scalable, highly available applications in general, you will often hear two terms: Scale Out and Scale Up, also called Horizontal Scaling and Vertical Scaling. I thought I would cover the basics and provide some clarity for developers and IT specialists.
What is Scalability?
Scalability is the capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth. For example, a system is considered scalable if it is capable of increasing its total output under an increased load when resources (typically hardware) are added.
A system whose performance improves after adding hardware, proportionally to the capacity added, is said to be a scalable system.
This applies to any system, such as:
- Commercial websites or web applications with a large, frequently growing user base,
- or an immediate need to serve a high number of users for a high-profile event or campaign,
- or a streaming event that needs immediate processing capability to serve streams to a larger set of users across a certain region or globally,
- or an immediate work- or data-processing job that requires higher compute than usual.
Scalability can be measured in various dimensions, such as:
- Administrative scalability: The ability for an increasing number of organizations or users to easily share a single distributed system.
- Functional scalability: The ability to enhance the system by adding new functionality at minimal effort.
- Geographic scalability: The ability to maintain performance, usefulness, or usability regardless of expansion from concentration in a local area to a more distributed geographic pattern.
- Load scalability: The ability for a distributed system to easily expand and contract its resource pool to accommodate heavier or lighter loads or number of inputs. Alternatively, the ease with which a system or component can be modified, added, or removed, to accommodate changing load.
- Generation scalability: The ability of a system to scale by adopting new generations of components. Relatedly, heterogeneous scalability is the ability to use components from different vendors.
Scale-Out/In / Horizontal Scaling:
To scale horizontally (or scale out/in) means to add more nodes to (or remove nodes from) a system, such as adding a new computer to a distributed software application.
- Load is distributed across multiple servers.
- Even if one server goes down, other servers remain to handle the requests or load.
- You can add or remove servers depending on usage patterns or load.
- Perfect for highly available web applications or batch-processing operations.
- You need additional hardware/servers, which increases infrastructure and maintenance costs.
- You need to purchase additional licenses for the OS or other licensed software.
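The scale-out behavior in the bullets above can be pictured with a toy round-robin load balancer (a simplified sketch; real load balancers also do health checks, weighting, and session affinity):

```python
class RoundRobinBalancer:
    """Toy load balancer: distributes requests evenly across servers,
    and keeps serving when servers are added (scale out) or removed
    (scale in, or a node failure)."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._next = 0

    def route(self, request):
        if not self.servers:
            raise RuntimeError("no healthy servers")
        server = self.servers[self._next % len(self.servers)]
        self._next += 1
        return f"{server} handled {request}"

    def add_server(self, name):      # scale out
        self.servers.append(name)

    def remove_server(self, name):   # scale in / failure
        self.servers.remove(name)

lb = RoundRobinBalancer(["web-1", "web-2"])
print(lb.route("req-1"))   # web-1 handled req-1
print(lb.route("req-2"))   # web-2 handled req-2
lb.add_server("web-3")     # add capacity under load
lb.remove_server("web-1")  # one node fails; traffic still flows
print(lb.route("req-3"))
```

The key property is that losing one node degrades capacity but not availability, which is exactly the resilience argument for scaling out.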
Scale-Up/Down / Vertical Scaling:
To scale vertically (or scale up/down) means to add resources to (or remove resources from) a single node in a system, typically involving the addition of CPUs or memory to a single computer.
- Possibility to increase CPU/RAM/storage virtually or physically.
- A single system can serve all of your data/work-processing needs once the hardware upgrade is done.
- Minimal cost for an upgrade.
- Once you are physically or virtually maxed out, you have no other options.
- A crash could cause outages for your business-processing jobs.
We have discussed both approaches to scalability in detail; depending on your needs, you will have to choose the right one. Nowadays, with the wide availability of cloud computing platforms such as Amazon AWS and Microsoft Azure, you have many flexible ways to scale out or scale up in a cloud environment, which provides virtually unlimited resources, provided you are able to pay for them accordingly.
Hope this information was helpful. Please leave a comment if you find any discrepancies or have any queries.
August 13, 2016
.NET, ASP.NET, Azure, Cloud Computing, Data Caching, Data Hubs, Emerging Technologies, KnowledgeBase, Microsoft, Performance, Redis Cache, Windows Azure Development
Azure Redis Cache is a secure data cache based on the open-source Redis, provided as a fully managed service by Microsoft. This means you don't have to bear the burden of managing the server, software patches, and so on.
What is Redis Cache?
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.
You can run atomic operations on these types, like appending to a string; incrementing the value in a hash; pushing an element to a list; computing set intersection, union and difference; or getting the member with highest ranking in a sorted set.
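To make those atomic operations concrete without requiring a Redis server, here is a tiny in-memory mimic of `INCR` and sorted-set behavior (an illustration of the command semantics only, not the real redis client API):

```python
class MiniRedis:
    """Tiny in-memory mimic of a few Redis commands, for illustration."""

    def __init__(self):
        self.data = {}

    def incr(self, key):                 # INCR: atomic counter
        self.data[key] = self.data.get(key, 0) + 1
        return self.data[key]

    def zadd(self, key, member, score):  # ZADD: sorted-set insert/update
        self.data.setdefault(key, {})[member] = score

    def ztop(self, key):                 # member with the highest score
        zset = self.data.get(key, {})
        return max(zset, key=zset.get) if zset else None

r = MiniRedis()
r.incr("page:views")
r.incr("page:views")
r.zadd("leaderboard", "alice", 120)
r.zadd("leaderboard", "bob", 95)
print(r.data["page:views"])   # 2
print(r.ztop("leaderboard"))  # alice
```

In real Redis these commands run atomically inside the single-threaded server, which is what makes counters and rankings safe under concurrent clients.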
In order to achieve its outstanding performance, Redis works with an in-memory dataset. Depending on your use case, you can persist it either by dumping the dataset to disk every once in a while, or by appending each command to a log. Persistence can be optionally disabled, if you just need a feature-rich, networked, in-memory cache.
Redis also supports trivial-to-setup master-slave asynchronous replication, with very fast non-blocking first synchronization, auto-reconnection with partial resynchronization on net split.
5 High-level Use Cases of Redis Cache
1. Session Cache
One of the most apparent use cases for Redis is using it as a session cache. The advantage of using Redis over other session stores, such as Memcached, is that Redis offers persistence. You can keep your application's user, role, and authorization permission lists in Redis Cache for faster access.
2. Full Page Cache (FPC)
Outside of basic session tokens, Redis provides a very easy FPC platform to operate in. Coming back to consistency: even across restarts of Redis instances, with disk persistence your users won't see a decrease in speed for their page loads.
3. Queues
Taking advantage of Redis' in-memory storage engine to do list and set operations makes it an amazing platform to use for a message queue. Interacting with Redis as a queue should feel native to anyone used to push/pop operations on lists in programming languages such as C#, Python, Java, or PHP.
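The push/pop queue pattern can be simulated with the standard library; this mimics the semantics of the Redis list commands `LPUSH`/`RPOP` (it is not a Redis client):

```python
from collections import deque

class ListQueue:
    """Mimics a Redis list used as a FIFO queue:
    LPUSH adds at the head, RPOP removes from the tail."""

    def __init__(self):
        self._items = deque()

    def lpush(self, value):
        self._items.appendleft(value)

    def rpop(self):
        return self._items.pop() if self._items else None

q = ListQueue()
q.lpush("job-1")
q.lpush("job-2")
print(q.rpop())  # job-1  (first in, first out)
print(q.rpop())  # job-2
```

Producers push on one end and workers pop from the other, so the oldest job is always processed first.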
4. Leaderboards/Counting
Redis does an amazing job at increments and decrements since it's in-memory. Sets and sorted sets also make our lives easier when trying to do these kinds of operations, and Redis happens to offer both of these data structures.
5. Pub/Sub
The use cases for Pub/Sub are truly boundless. You can use it for social network connections, for triggering scripts based on Pub/Sub events, and even for a chat system built on Redis Pub/Sub!
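An in-process simulation of publish/subscribe semantics makes the pattern clear: every subscriber to a channel receives each message, and `publish` returns the receiver count, as Redis' PUBLISH does. (This is a sketch of the semantics, not the Redis wire protocol.)

```python
from collections import defaultdict

class PubSub:
    """In-process simulation of SUBSCRIBE/PUBLISH semantics."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, channel, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel, message):
        for callback in self._subscribers[channel]:
            callback(message)
        return len(self._subscribers[channel])  # receiver count

inbox = []
bus = PubSub()
bus.subscribe("chat:room1", inbox.append)
bus.publish("chat:room1", "hello")
print(inbox)  # ['hello']
```

A chat room is just a channel name; each connected client subscribes, and any client's message fans out to all of them.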
Finally let us come to context of this blog to take you to essential pricing model from Microsoft:
Azure Redis Cache is available in three tiers:
- Basic—Single node, multiple sizes, ideal for development/test and non-critical workloads. The Basic tier has no SLA.
- Standard—A replicated cache in a two-node primary/secondary configuration managed by Microsoft, with a high-availability SLA.
- Premium—All of the Standard tier features, including a high-availability SLA, as well as better performance over Basic and Standard-tier caches, bigger workloads, disaster recovery, redis persistence, redis cluster, enhanced security and isolation through Virtual Network Deployment.
- ** Basic and Standard caches are available in sizes up to 53 GB (250 MB, 1 GB, 2.8 GB, 6 GB, 13 GB, 26 GB, 53 GB).
- ** Premium caches are available in sizes up to 530 GB with more on request.
July 10, 2015
Hyper-V, KnowledgeBase, Microsoft, OS Virtualization, Tips & Tricks, Virtual Machines, Virtualization, VMware, Windows, Windows 10
ESX, Fusion, Hyper-V, VMware, VMWare Workstation
As a Windows 10 Insider, I always run the latest version of Windows on VMware Player, Workstation, or VirtualBox. Recently I was trying to set up a Windows Phone 10/UWP development environment inside a VMware virtual machine.
I tried to enable the Hyper-V platform components in my Windows 10 Preview virtual machine, and it showed an error:
Hyper-V cannot be installed: A hypervisor is already running
- Unable to use the Hyper-V platform inside a Windows 10 virtual machine.
- When trying to enable/install Hyper-V in a Windows 10 virtual machine, you will see the above error.
The solution is to edit the VMware virtual machine configuration (.vmx) file in the location where your Windows 10 virtual machine is stored.
- Switch off/Shutdown your VMware virtual machine
- Edit the corresponding .vmx file
- Append the following entries to the vmx file (verify entry if already exists)
hypervisor.cpuid.v0 = "FALSE"
vhv.enable = "TRUE"
mce.enable = "TRUE"
- Save the changes
- Start your Windows 10 VMware virtual machine
- Now go to Control Panel –> ‘Programs and Features’ –> Turn windows features on or off
- Voila! You can now enable 'Hyper-V Platform'. Now you can install the Windows Phone SDK on your Windows 10 virtual machine.
VMware Official Knowledgebase Reference Link:
Hope that helps anyone with similar problems.
May 20, 2015
Azure, Azure Stack, Cloud Computing, Cloud Services, IaaS, Microsoft, PaaS, SaaS, Storage, Backup & Recovery, Virtual Machines, Windows Azure Development, Windows Azure
There is a common tendency to criticize any product that comes from Microsoft, and to assume that with Azure, Microsoft would only be promoting its own products. That's not right; I would say we are being judgmental without even looking at the capabilities of Microsoft Azure.
Microsoft Azure, the prime competitor to Amazon's AWS offerings, has improved a lot, and the focus is on making cloud computing capabilities available to everyone.
What is Microsoft Azure Stack?
Simply put, it paves the way to equipping your data center with on-premises private/hybrid cloud computing capabilities.
Azure Stack is a combined solution consisting of Windows Server 2016, Azure Pack and Azure Service Fabric.
- Azure Pack is a set of Azure features that Microsoft makes available to its hosting service providers and larger business customers as a download that can run on top of Windows Server and System Center.
- Service Fabric is the new infrastructure management technology Microsoft is integrating into Azure and Windows Server that will enable applications to be developed, deployed, and managed in a micro-service structure.
- Windows Server 2016 is an upcoming server operating system developed by Microsoft as part of the Windows NT family of operating systems, developed concurrently with Windows 10; it is expected to be released in Q3 2016 and is currently in Technical Preview.
Here's a simplified view of the Azure Stack architecture (sourced from the Microsoft Azure Stack documentation):