sábado, 4 de octubre de 2014

Firefly perimeter cluster

This is the first really cool thing. Until now, to play with SRX you either had to buy a couple of physical devices, which are usually quite expensive for regular home users (the cheapest I've seen is 200€ for an 8-port SRX 100), or go for GNS3 and deal with its limitations, which are fewer every time I try it, to be honest.

With Firefly Perimeter you can virtualize an SRX appliance and try it for 60 days at home. It comes with most of the features a mid-tier SRX has: VPNs, clustering, advanced routing, IDP and UTM, etc., so it fits my goals perfectly, as on top of that I get all of it virtualized without needing a physical appliance.

You can download the OVA file from the Juniper site; the one I've used comes with JunOS 12.1. I first tried to deploy it on ESXi 5.5, but for some reason I couldn't get the cluster to work, so I installed ESXi 5.1 instead and this time everything went fine. I'll explain the process step by step below.

Preparing ESXi for the cluster


Firefly Perimeter uses three interfaces to build the cluster. As noted on the Juniper site, these interfaces are:

  • ge-0/0/0 is the Out-of-band Management Interface (fxp0).
  • ge-0/0/1 is the Cluster Control-Link (fxp1).
  • ge-0/0/2 is the Cluster Fabric-Link (fab0 on node0, fab1 on node1).

The interface mapping between the ESXi VM and the Firefly is straightforward: network adapter 1 in the VM configuration corresponds to interface ge-0/0/0, network adapter 2 to interface ge-0/0/1, and so on.

Each of the interfaces reserved for redundancy on one node has to be in the same vSwitch as its counterpart on the other node, and the vSwitches for the control and fabric links must have an increased MTU. This is because traffic traversing from one node to the other through the fabric link gets encapsulated, so raising the MTU allows full-sized packets arriving at a regular reth or normal interface to pass from one node to the other with the extra header.

To create them, go to the Configuration tab of the ESXi server in the vSphere client, select Networking and click the Add Networking… link. In the wizard that pops up, select a Virtual Machine connection type, create a vSphere standard switch and change the network label to something meaningful, like HA-Ctrl-CID-1 for the control-link vSwitch and HA-Fab-CID-1 for the fabric links. A warning about no physical NIC being attached to the switch will pop up, but we don't need one, so you can ignore it.

Now the new vSwitch will show up in the ESXi's list of switches, but we still need to change the MTU. To do so, click the Properties link of the newly created vSwitch, click the Edit button and change the MTU value to 9000. The vSwitch properties will then look like this:
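If you prefer the command line, the same thing can be done from the ESXi shell (or over SSH) with esxcli; this is just a sketch using the vSwitch and port-group names I chose, so adapt them to your own setup:

```shell
# Create the standard vSwitches for the control and fabric links
esxcli network vswitch standard add --vswitch-name=HA-Ctrl-CID-1
esxcli network vswitch standard add --vswitch-name=HA-Fab-CID-1

# Raise the MTU to 9000 so the encapsulated fabric traffic fits
esxcli network vswitch standard set --vswitch-name=HA-Ctrl-CID-1 --mtu=9000
esxcli network vswitch standard set --vswitch-name=HA-Fab-CID-1 --mtu=9000

# Add a port group on each switch to attach the VM network adapters to
esxcli network vswitch standard portgroup add --portgroup-name=HA-Ctrl --vswitch-name=HA-Ctrl-CID-1
esxcli network vswitch standard portgroup add --portgroup-name=HA-Fab --vswitch-name=HA-Fab-CID-1
```

Either way, the point is the same: both nodes' control links share one vSwitch, both fabric links share another, and both switches run with MTU 9000.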



Deploy the Firefly perimeter OVA

First of all we obviously need to download the OVA file from the Juniper site. Once we have it, we just select Deploy OVF Template in our ESXi, accept the terms of the license and go ahead with the regular settings, only changing the name of the VM, and do it twice, once for each node of the cluster. I've chosen FF1 and FF2 as names.
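As a side note, the same deployment can be scripted with VMware's ovftool instead of clicking through the wizard. The host address, datastore and OVA file name below are placeholders for my environment, not something from the Juniper docs:

```shell
# Deploy the same OVA twice, once per cluster node (FF1 and FF2)
ovftool --acceptAllEulas --name=FF1 --datastore=datastore1 \
    junos-vsrx-12.1.ova 'vi://root@192.168.1.50/'
ovftool --acceptAllEulas --name=FF2 --datastore=datastore1 \
    junos-vsrx-12.1.ova 'vi://root@192.168.1.50/'
```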





By default, the VM comes with two interfaces, both connected to the default vSwitch. As we have seen, we'll need at least three interfaces for the cluster, plus as many extra interfaces as we want, up to ten in total.

I added three interfaces to each node: three are used as mentioned before, one is for a "front-end" and one for a "back-end". I created dedicated vSwitches for each of them, so the hardware properties of each node look like this:



Configuring cluster


Now we have everything ready to power on the VMs and configure them. The default config is the usual one for a Junos device, so we log in with the root account, which needs no password until we configure one. The first thing we'll do is set passwords for the root and admin accounts, and enable the SSH service:

#set system root-authentication plain-text-password
New password:
Retype new password: 

#set system login user admin class super-user authentication plain-text-password
New password:
Retype new password:

#set system services ssh 

The next thing to configure is the fxp0 interface address on each node, so they can reach each other to build the cluster, and also each node's host name. I used IP addresses in the same subnet as the PC I use to connect to the ESXi since, as you may recall, the vSwitch connected to ge-0/0/0 is the one attached to the ESXi physical NIC:

#set groups node0 system host-name FF1
#set groups node0 interfaces fxp0 unit 0 family inet address 192.168.1.200/24
#set groups node1 system host-name FF2
#set groups node1 interfaces fxp0 unit 0 family inet address 192.168.1.201/24
#set apply-groups "${node}"



We have to configure the fabric links too, for the redundancy to work properly. Note that once the cluster is formed, node1's interfaces get renumbered starting at ge-7/0/0, which is why fab1 points at ge-7/0/2:

#set interfaces fab0 fabric-options member-interfaces ge-0/0/2
#set interfaces fab1 fabric-options member-interfaces ge-7/0/2

We also need to remove the configuration that comes by default for interface ge-0/0/0, which out of the box sits in the untrust security zone with management services enabled:

#delete system services web-management http interface ge-0/0/0.0
#delete security zones security-zone untrust interfaces ge-0/0/0.0
#delete interfaces ge-0/0/0

Now the initial setup is ready, so we can commit and-quit:

#commit and-quit

We are now ready to configure the cluster. In the first node:

> set chassis cluster cluster-id 2 node 0 reboot

And in the second node:

> set chassis cluster cluster-id 2 node 1 reboot

The cluster-id can be any number from 1 to 255. I chose 2 because I already have a cluster with ID 1.

Now both VMs will reboot and, if everything goes fine, they will form a cluster once they finish booting up. Each node will then be reachable over SSH on the IP address we configured, both on node0, which is primary:


And in node1:


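The cluster state can also be verified from the CLI on either node with the standard Junos operational commands:

> show chassis cluster status
> show chassis cluster interfaces

The first shows the priority and primary/secondary role of each node per redundancy group, and the second shows the state of the control and fabric links.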

Configuring redundant groups and interfaces

The last step is to set up the redundancy groups and place the redundant interfaces in them. I'm only setting up redundancy-group 0, used for the routing-engine failover, and redundancy-group 1, where I will place the two redundant interfaces that I'll create.

First, we define the redundancy groups 0 and 1, with a higher priority in node 0 so it becomes primary when available:

# set chassis cluster redundancy-group 0 node 0 priority 100
# set chassis cluster redundancy-group 0 node 1 priority 1
# set chassis cluster redundancy-group 1 node 0 priority 100
# set chassis cluster redundancy-group 1 node 1 priority 1

Now we set the maximum number of redundant interfaces to 7, as there can only be seven regular interfaces in the Firefly VM:

#set chassis cluster reth-count 7

Then we create the redundant interfaces and assign them to security zones created ad hoc. I will use the names FE and BE for the security zones, and allow all traffic from FE to reach BE, just for testing purposes:

#set interfaces ge-0/0/3 gigether-options redundant-parent reth3
#set interfaces ge-7/0/3 gigether-options redundant-parent reth3
#set interfaces reth3 redundant-ether-options redundancy-group 1
#set interfaces reth3 unit 0 family inet address 10.100.0.254/24

#set interfaces ge-0/0/4 gigether-options redundant-parent reth4
#set interfaces ge-7/0/4 gigether-options redundant-parent reth4
#set interfaces reth4 redundant-ether-options redundancy-group 1
#set interfaces reth4 unit 0 family inet address 10.100.100.254/24

#set security zones security-zone BE interfaces reth3.0
#set security zones security-zone FE interfaces reth4.0

#set security policies from-zone FE to-zone BE policy ALLOW_ALL match source-address any
#set security policies from-zone FE to-zone BE policy ALLOW_ALL match destination-address any
#set security policies from-zone FE to-zone BE policy ALLOW_ALL match application any
#set security policies from-zone FE to-zone BE policy ALLOW_ALL then permit

Now the chassis status looks like this:
 


Everything looks good! I tried a continuous ping from a VM connected to the FE vSwitch to another one on the BE vSwitch, then shut down node0: not a single ping was lost. Node1 took over automatically and the failover went completely unnoticed.
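Instead of powering a node off, a failover can also be forced (and later reset) from the Junos operational CLI, which is handy for repeating the test:

> request chassis cluster failover redundancy-group 1 node 1
> request chassis cluster failover reset redundancy-group 1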

In the next entries I'll try a more complex setup, but first I'd like to give IDP a try. We'll see if it works as smoothly as this did.

References:

http://www.juniper.net/techpubs/en_US/firefly12.1x47/topics/task/multi-task/security-virtual-perimeter-cluster-stage-provisioning-vmware.html
http://www.juniper.net/techpubs/en_US/firefly12.1x46-d10/topics/task/multi-task/security-virtual-perimeter-chassis-cluster-configuring.html

domingo, 28 de septiembre de 2014

Installing ESXi

So, after a week marked by the Shellshock vulnerability, which kept me quite busy for the last few days, I finally had some time to get back to my small lab project.

The TS140 and the memory module arrived right on time, and I had an old 750GB hard drive around that I hadn't used in quite some time (we will see later why), so I got to work. Getting the components into place was quite easy; you don't even need a screwdriver, so the physical part was quickly done:




Problems started with the software install, though. I decided to run ESXi from a USB drive, since I had read it was a good option with little impact on performance. Having a NAS with 4TB, I thought it would be nice to have datastores via NFS, but to get started I planned to use the 750GB drive. To create a bootable USB drive I found the useful tool Rufus, which I used to format a 4GB USB drive (using a bigger one seems a waste of resources) with the ESXi 5.1 image downloaded from the VMware site. I was going to use ESXi 5.5, but since I had read about certain problems with Firefly Perimeter clustering on it, I went for 5.1 instead.

Right after booting up, first issue: menu.c32: not a COM32R image. Apparently it is quite common and is due to VMware using an old file on the bootable ISO they provide. Rufus had actually already warned me about it while formatting the drive, so it was easy to fix.

Second attempt to boot, second problem: the NIC is not supported. I had read about this while looking for a server, but didn't think it would pop up that soon. Anyway, after a bit of research on Google I found the tool ESXi Customizer. It allows you to create a custom bootable ISO image where you can include drivers that don't come in the standard image or, as in my case, replace the standard one with a custom driver found thanks to the user mapd07 in a VMware community post.

After that I thought everything would go smoothly, but there was still another surprise waiting for me. The installer was taking too long to boot, which at first I blamed on the "cheap server", but when it dragged on I switched to another tty and saw a cascade of errors related to the hard drive. Apparently it was broken: not only could the installer not mount it (it hadn't even got that far), it couldn't even access the device. So I powered off the server, removed the hard drive and went ahead installing ESXi on the USB drive itself.

The rest of the install process was quick, and the console now showed the server's address and that I should be able to manage it through a client on my LAN. So I went back to my laptop and, indeed, the host was accessible through the vSphere client. It was warning me that it was licensed only for a 60-day trial, so I went back to the VMware download site, grabbed my license key and entered it in the ESXi. Now it is limited in capabilities, but not in time:



Later I borrowed a 1TB hard drive that I'm now using as a datastore. I'll explain in further posts what I have done so far, including Firefly Perimeter, Junos Space and LTM VE. Being a noob didn't help, and I guess I made every mistake that could be made, but so far I have my own ESXi running at home, which is quite cool!

lunes, 22 de septiembre de 2014

New virtual networking lab!

So, long story short: I wanted to be able to play with devices such as vSwitches, firewalls, F5 LTM, etc., and over the last few years it has become more and more common to get them as virtual machines.

My skills in virtualization and as a sysadmin are very limited, so I thought it would also be a good idea to build my own home lab, with an ESXi host where I'd deploy VMs such as Junos Firefly, LTM VE, etc. That way I'll hopefully also learn something about virtualization with ESXi.

So far I have already bought a Lenovo TS140 from Amazon plus an 8GB RAM module from Alternate. It was not easy to find memory, mainly because ECC + unbuffered is not cheap at all, so I only got one module, and will probably get another one soon.

The other options I checked were the HP MicroServer Gen8 and the Dell PowerEdge T20, but I chose the Lenovo mainly because it is easily expandable and I had already found that what I want to achieve seems to be possible.

I already have an idea of what the lab should be, and what I would like to have is:

 - vmware ESXi   
 - F5 LTM VE
 - Junos Firefly
 - Junos Space

All these products come as VMs with a trial license, whose terms I'll have to read carefully, just to be sure.

It should allow me to play with automation, clustering, IRB and other concepts in a non-production environment. At some point I would also like to replace the standard vSwitch with some sort of SDN-capable vSwitch I can use to practice SDN principles, and depending on demo-license availability, maybe even security/IDP.

I'll try to keep track of what I do step by step. I still can't believe I'll be able to do all this in my own house, with equipment worth less than 500€. Performance might be cr*ppy, but it still amazes me that I'll be able to have all this!