MS SQL mirrored database stuck in “recovery pending” state

Today, I had to troubleshoot a situation where, following a myriad of improbable events:

  • An MS SQL mirrored database was in “recovery pending” state on one server.
  • We had to bring it back online on that server.

I initially tried to force the database back online by issuing:

ALTER DATABASE <db_name> SET PARTNER OFF;

However, the reply I got was the message: “Database cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server error log for details”. Of course, permissions were fine and there was plenty of free space.

The solution was to:

  1. Move the database and log files (.mdf, .ldf) to another location.
  2. Drop the database.
  3. Move the database and log files back to their initial location.
  4. Re-attach the database.
  5. Reconfigure SQL Database Mirroring (sketched in T-SQL below).
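
In T-SQL, steps 2, 4 and 5 would look roughly like the sketch below. It is a sketch only: <db_name>, the file paths and the mirror endpoint are placeholders to adapt to your environment, and re-establishing mirroring assumes the usual prerequisites (a full backup restored WITH NORECOVERY on the mirror) are in place.

-- Step 2: drop the database (its files were already moved away at the OS level)
DROP DATABASE <db_name>;

-- Step 4: once the files are back in their initial location, re-attach
CREATE DATABASE <db_name>
    ON (FILENAME = 'D:\Data\<db_name>.mdf'),
       (FILENAME = 'D:\Logs\<db_name>_log.ldf')
    FOR ATTACH;

-- Step 5: re-establish mirroring (run on the principal; the mirror side
-- needs its matching SET PARTNER statement first)
ALTER DATABASE <db_name> SET PARTNER = 'TCP://<mirror_server>:5022';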

Back online!

Introduction to “DevSecOps”

The origins of DevOps and DevSecOps

DevOps is a philosophy that emerged from the Agile and Lean movements of the years 2000-2005. These movements aimed to make development more flexible and faster, while building quality applications centered on users’ needs. Applying these methods increased development teams’ delivery capacity and changed how they planned their infrastructure needs. However, operational processes were not adapted to this new reality: this contributed to creating delays and friction between development and IT operations teams. At the root of this dichotomy, note that the raison d’être of development teams is to push new features forward, while that of operations is to ensure the availability and performance of services and applications.


How to set up an MS SQL server for non-DBA systems administrators

It’s a fact that most MS SQL servers running in organizations are not built by DBAs. Small, medium and even larger organizations often don’t hire professional DBAs, and even in environments where there are DBAs, it happens quite often that database servers were built by sysadmins for various reasons.

Microsoft did a great job: it’s easy to set up a SQL server. Probably too easy, as it will run “fine” without too much effort… but for how long? And it could probably run faster, with less pressure on your infrastructure.

This is the first of a series of articles dedicated to all my sysadmin friends who are building MS SQL servers to the best of their knowledge.


Is Nutanix giving up its roots?

Over the last few months, I’ve been observing a lot of changes in Nutanix’s market strategy. While it’s probably for the greater good, I feel forgotten as a customer who chose their hyperconverged solution for technological reasons, but also to align with my employer’s financial model preferences.

First, I must admit that I’m biased towards Nutanix: I’ve been following them since 2013, and when I got an opportunity, I became a customer in 2016. At that time, we had been migrating our workloads to the cloud for a few years and we were struck by a reality check: cloud was becoming expensive and weighed heavily on the OPEX vs CAPEX balance. After reviewing multiple scenarios, including hyperconverged players, I built a business case to return to traditional datacenters using the Nutanix solution. The ROI was less than a year. The project was approved and moved forward. We delivered. Successfully.

We also got other benefits from that decision: we cut expenses further by decommissioning the remaining VMware servers and moving to AHV (which is also our technological preference, as it’s KVM under the hood). One of our most important successes (for us, IT people): we kept all of the agility we had become so fond of after using AWS and Azure for a few years.

Deploying Nutanix clusters takes merely a few hours of work. As soon as we are done, we are able to instantiate workloads the same way we would in the cloud. No need to think about storage, fiber fabrics, load balancing, etc. We are also able to integrate our deployment scripts and monitoring solutions with the Nutanix APIs. Keeping the environment up to date is a breeze (literally a few clicks), especially compared to traditional infrastructure (server firmware, SANs, controllers, the craziness of individual hard drive firmware, fabrics, etc.). Nutanix is a wonderful product.

The only missing part for me at this point was on the networking side: where were my security groups or Azure NSGs in that hyperconverged awesomeness? They greatly improve segregation and the overall security posture, and they were lacking in the Nutanix solution. But, as the vendors were saying, it was on the roadmap…

After many acquisitions (PernixData, Calm.io and now others), and after going public in September 2016, Nutanix continues to develop and evolve. They have been aggressively developing and adding innovative features like AFS, ABS, ACS and Calm. Earlier this year, they came up with multiple exciting products:

  • Flow: The eagerly awaited solution for app-centric network microsegmentation.
  • Beam: Multi-cloud cost optimization & compliance management service.
  • Era: Automated and simplified database management. This will be launched later in 2018.
  • Xi: Bridging the gap between public cloud services and on-premises Nutanix deployments.

To achieve their mission of making infrastructure invisible and elevating IT to focus on the applications and services powering their business, Nutanix is moving forward as a software company, not as an IT infrastructure company. In doing so, and presumably to satisfy its shareholders now that it is under public scrutiny, Nutanix is following the model of its counterparts (Microsoft, Google and many more) down the “as a service” path. Most (if not all) of Nutanix’s recent products (Calm, Flow and others) will be delivered only in a subscription, pay-as-you-go, model.

While it totally makes sense for some services like Beam and Xi, I must admit that I would have preferred other options for Calm and Flow. It’s also difficult to forget vendor pitches from not so long ago claiming that Nutanix is always evolving its solution and that by buying the “Ultimate” package, I would get all the bells & whistles that would become available in the future. They even added that Nutanix was not following its counterparts (read VMware) by selling its product under many editions with various features and multiple options. It’s clear that things have changed in San Jose…

Not only do we have to pay extra money to extend our Nutanix capabilities with Flow and Calm, but we need to do so as OPEX instead of CAPEX, which is even more disturbing for me. The CAPEX model was one of the main drivers in my business case to move from the cloud to Nutanix. In other words, at my current company, because they are delivered under an OPEX model, Calm and Flow are not an option.

Don’t get me wrong: it’s great that Nutanix is extending its solutions to multiple hardware vendors and to subscription models, as it will allow them to reach many potential clients who could have been blocked by the required CAPEX investment. My concern is what will happen to people like me, who work for companies where performance is evaluated on EBITDA and who decided to leverage Nutanix’s on-premises solution to limit OPEX expenses. Will they keep their CAPEX-based model? For how long? What will be the limitations of these perpetual licenses and deployments?

While Nutanix is one of my favorite platforms, I’m having a hard time figuring out ways I could leverage a Nutanix deployment on a subscription basis. One of the few cases I can think of to justify it is a company that prefers OPEX but has restrictions preventing it from hosting its data in a public cloud provider’s datacenter; going with a Nutanix subscription would allow it to have its own in-house cloud. Otherwise, for a company that prefers OPEX and has no data location constraints, I would probably architect around a public cloud solution, leaving behind any hardware and datacenter management while probably spending less money.

Overall, this move by Nutanix is really interesting, as they are entering a promising market with at least one major player already in it: Microsoft. The Azure Stack solution might be younger as an on-premises cloud solution, and it might have a lot of limitations compared to its public cloud counterpart. However, the Azure platform is rich in features, Microsoft’s roadmap is aggressive, and we know that they can deliver. I would literally love to evaluate Azure Stack against Nutanix to see how they compare in terms of features, scalability, stability and performance.

For now, I will evaluate alternatives to Flow, like Illumio (maybe cheaper, as the licensing seems more flexible, with less vendor lock-in), and I will probably continue to use Ansible as a configuration management solution instead of jumping on the Calm bandwagon.

I’m back

After more than two years of being offline, this blog is back.

I originally started it in 2006, when I was a DBA; at that time, I was writing about MS SQL and also about open source subjects. As my career evolved towards IT security and management responsibilities, my subjects of discussion followed the same path.

At the beginning of 2016, seeing that I hadn’t posted anything in the previous year (a result of work-life balance challenges: a young family at home and so many hours spent at work), I decided to shut down my old WordPress blog.

I decided to bring this site back online as I’m still in love with technology and want to share about the various projects I’m working on every day:

  • Open Source
  • IT Security
  • DevOps
  • Cloud (Office 365, AWS, Azure, etc.)
  • Hyperconvergence
  • Containers
  • Management (maybe) 🙂

I hope you will enjoy reading my posts.

Gartner Catalyst Recap

This month, from August 11th to August 14th, I had the opportunity to attend the Gartner Catalyst conference, held at the Manchester Grand Hyatt in San Diego. Catalyst is Gartner’s conference for technical professionals; this year’s theme was “Architecting the Digital Business: Scaling and Securing Mobility, Cloud and Data”, and the topics were: securing the public cloud, making big data real, BYOD do’s and don’ts, cloud deployment models and protecting mobile data.

There were seven tracks to choose sessions from:

A. Architecting Mobility to Drive Business Innovation
B. Information: The Lifeblood of the Digital Enterprise
C. Architecting Cloud Services for a Scalable IT Foundation
D. Protecting Your Business From Global Cyber Risks
E. Maximizing Employee Productivity in a Mobile- and Cloud-Driven World
F. Software-Defined Data Center: The Blueprint for the Agile Infrastructure
G. Driving Innovation in the New Era of Software Development

Even if tracks B (Data) and D (Security) are subjects that have always fascinated me, and around which my career has been focused at some point, I decided to attend mostly sessions from tracks C and E, as they were more consistent with the research and projects we have conducted in recent years and with the strategy we are taking to meet business orientations and needs.

I enjoyed most of the sessions I attended: the content was interesting and satisfying. However, as interesting as the sessions were, I was left wanting more: I had the feeling that I didn’t hear anything really new or revolutionary, but that’s a personal opinion. After all, Gartner must satisfy the different tastes of hundreds (or was it thousands?) of visitors. I personally preferred the most technical sessions, or the ones with tricks, observations and real facts. To name a few of my favorites: Angelina Troy’s “Clash of the New Storage Giants: Amazon, Azure, and Google Comparison”, Eric Maiwald’s “Network Security for Private and Hybrid Clouds” and Simon Richard’s “Hybrid Cloud Network Connections: The Missing Link”. The presenters were great and the content went beyond my expectations: they presented real numbers, detailed comparisons of different cloud offerings, and explained solutions to common problems we will face with cloud implementations.

Apart from the sessions, Catalyst is also a place to attend workshops on different topics, listen to end-user case studies, meet analysts one-on-one to discuss any subject you want, and much more. To be honest, one of the biggest challenges I had is that there was always so much going on that I would have needed to duplicate myself three or four times. Lunch time and happy hour were good moments for sharing and for discussions with peers about the various challenges we face with technology in our enterprises and industries. It was a good opportunity to get feedback and suggestions from people in the field like us.

Finally, my favorite part of Catalyst was definitely the guest keynote sessions with Bill Nye and Tom Wujec. They were entertaining, refreshing, interesting and relevant to their audience.

“One test is worth a thousand expert opinions.” – Tex Johnston, after barrel-rolling a Boeing 707 prototype.

I really liked Tex Johnston’s quote, and I found that it applies very well to the world of Information Technology! I’m sure I will reuse it someday!

Tom Wujec’s presentation about “Visualizing Business Strategies” was fascinating and thought-provoking; he gave insight into visualization techniques we could use in the enterprise to stimulate creativity and achieve better productivity while lowering costs.

Now, will I return to Catalyst?  Certainly!

KVM – Resize guest LVM disk

#
## Logon to KVM host
#

#
## Resize guest volume:
#

# If LVM guest volume:
lvextend -L +50G /dev/vg_kvm/ubuntu

# If qemu guest volume
qemu-img resize ubuntu.qcow2 +50G
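# Note: resize the image only while the guest is shut down; qemu-img must
# never modify an image in use by a running VM, as this can corrupt it.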

#
## Refresh storage pool:
#

virsh pool-refresh vg_kvm

#
## Logon to KVM guest
#

#
## Check current disk layout:
#
root@ubuntu:/# fdisk -l

Disk /dev/vda: 107.4 GB, 107374182400 bytes
16 heads, 63 sectors/track, 208050 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00043482

Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048      499711      248832   83  Linux
/dev/vda2          501758   209713151   104605697    5  Extended
/dev/vda5          501760   209713151   104605696   8e  Linux LVM

Disk /dev/mapper/ubuntu-root: 98.6 GB, 98612281344 bytes
255 heads, 63 sectors/track, 11988 cylinders, total 192602112 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/ubuntu-root doesn't contain a valid partition table

Disk /dev/mapper/ubuntu-swap_1: 8480 MB, 8480882688 bytes
255 heads, 63 sectors/track, 1031 cylinders, total 16564224 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/ubuntu-swap_1 doesn't contain a valid partition table

root@ubuntu:~# fdisk /dev/vda

Command (m for help): p

Disk /dev/vda: 161.1 GB, 161061273600 bytes
16 heads, 63 sectors/track, 312076 cylinders, total 314572800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00043482

Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048      499711      248832   83  Linux
/dev/vda2          501758   209713151   104605697    5  Extended
/dev/vda5          501760   209713151   104605696   8e  Linux LVM

#
## Edit disk layout, delete LVM and Extended partition
#

Command (m for help): d
Partition number (1-5): 5

Command (m for help): d
Partition number (1-5): 2

#
## Recreate LVM partition
#
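
# Note: the new partition must start at the same first sector as the
# deleted LVM partition (501760 below); a different start sector would
# corrupt the physical volume.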

Command (m for help): n
Partition type:
p   primary (1 primary, 0 extended, 3 free)
e   extended
Select (default p): p
Partition number (1-4, default 2):
Using default value 2
First sector (499712-314572799, default 499712): 501760
Last sector, +sectors or +size{K,M,G} (501760-314572799, default 314572799):
Using default value 314572799

Command (m for help): p

Disk /dev/vda: 161.1 GB, 161061273600 bytes
16 heads, 63 sectors/track, 312076 cylinders, total 314572800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00043482

Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048      499711      248832   83  Linux
/dev/vda2          501760   314572799   157035520   83  Linux

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/vda: 161.1 GB, 161061273600 bytes
16 heads, 63 sectors/track, 312076 cylinders, total 314572800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00043482

Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048      499711      248832   83  Linux
/dev/vda2          501760   314572799   157035520   8e  Linux LVM

#
## Reboot
#

reboot

#
## Resize PV
#

root@ubuntu:~# pvresize /dev/vda2
Physical volume "/dev/vda2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized

#
## Extend LV
#

root@ubuntu:~# lvextend -L +30G /dev/ubuntu/root
Extending logical volume root to 121.84 GiB
Logical volume root successfully resized

#
## Resize filesystem
#

root@ubuntu:~# resize2fs /dev/ubuntu/root
resize2fs 1.42 (29-Nov-2011)
Filesystem at /dev/ubuntu/root is mounted on /; on-line resizing required
old_desc_blocks = 6, new_desc_blocks = 8
Performing an on-line resize of /dev/ubuntu/root to 31939584 (4k) blocks.
The filesystem on /dev/ubuntu/root is now 31939584 blocks long.

#
## Check free space
#

root@ubuntu:~# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu-root     122G   76G   40G  66% /
udev                             3.9G  4.0K  3.9G   1% /dev
tmpfs                            1.6G  260K  1.6G   1% /run
none                             5.0M     0  5.0M   0% /run/lock
none                             3.9G     0  3.9G   0% /run/shm
/dev/vda1                        228M   49M  168M  23% /boot
smtlaqnap1.garda.ca:/GARDA/home   55T   14T   41T  25% /home
root@ubuntu:~#

LUN scan and resize operations with multipath on Ubuntu 12.04

Here are some quick tips to discover newly attached LUNs or to rescan resized LUNs:

Scan to discover a newly attached LUN:

  1. for i in `ls /sys/class/fc_host`; do echo 1 > /sys/class/fc_host/${i}/issue_lip; done

Rescan resized LUN:

  1. Find the path of your LUN:
    multipath -l
  2. Execute following command to rescan your drives. Replace “dm-42” by the path you previously found:
    for i in `ls /sys/block/dm-42/slaves/`; do echo 1 > /sys/block/${i}/device/rescan ; done
  3. Resize multipath device. Replace “dm-42” by the path you previously found:
    multipathd -k'resize map dm-42'
  4. Resize the filesystem according to your needs.
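
For example (assuming the multipath device directly holds an ext4 filesystem, with no LVM layer in between), step 4 could be an online resize; replace “dm-42” by the path you previously found, as above:

    resize2fs /dev/dm-42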

Install aircrack-ng on Ubuntu 12.04

It seems aircrack-ng was removed from Ubuntu’s repositories. Here is how to install it without any source compilation:

  1. Download the package:
    wget http://launchpadlibrarian.net/71861454/aircrack-ng_1.1-1.1build1_amd64.deb
    or
    wget http://launchpadlibrarian.net/71861174/aircrack-ng_1.1-1.1build1_i386.deb
  2. Install the package (you will receive error messages about missing dependencies):
    sudo dpkg -i aircrack-ng_1.1-1.1build1_*.deb
  3. Install missing dependencies:
    sudo apt-get install -f
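  4. Verify the installation (a quick sanity check; running the tool without arguments prints its usage banner):
    aircrack-ng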