Tuesday, May 02, 2017

Red Hat launches openshift.io

Red Hat launched openshift.io at Red Hat Summit 2017.

It's a SaaS-based development environment that provides everything you need to get up and running and start producing code. The marketing speak actually says it is "A free, end to end, cloud native development environment".

This obviously caused a lot of questions for developers - it's sometimes (well, 99.99% of the time) difficult to join the dots between marketing and reality. This caused some questions on Hacker News, but luckily Tyler Jewell, CEO of Codenvy, was on hand to provide more clarity - it's worth a read.

Just to reiterate - what openshift.io is going to provide for developers is a really convenient hosted development environment to start their journey to developing apps for the cloud.

1. A deployment environment for your cloud native apps: you can use GitHub as your source repo, but deploy onto OpenShift to run your apps, for free (within limits). Red Hat wants to give developers the opportunity to run reasonable applications at no cost, but they don't have bottomless pockets, so there have to be some reasonable constraints - obviously you can chip in if you need more space/CPU.

2. A continuous deployment pipeline that will take any code changes through test and stage to run (using fabric8 and OpenShift pipelines under the covers).
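As a flavour of what a test-stage-run promotion flow looks like, here is a generic declarative Jenkinsfile sketch - this is an illustration only, not the actual fabric8 pipeline library, and the stage names, namespaces and Maven goals are assumptions:

```groovy
// Illustrative only: a generic test -> stage -> run promotion pipeline.
// The real openshift.io pipelines are generated by fabric8; these stage
// names, namespaces and commands are invented for this sketch.
pipeline {
    agent any
    stages {
        stage('Build & Test') {
            steps {
                sh 'mvn clean verify'   // compile and run the test suite
            }
        }
        stage('Stage') {
            steps {
                // deploy the candidate release into a staging environment
                sh 'mvn fabric8:deploy -Dfabric8.namespace=staging'
            }
        }
        stage('Approve & Run') {
            steps {
                input 'Promote to production?'   // manual gate before promotion
                sh 'mvn fabric8:deploy -Dfabric8.namespace=production'
            }
        }
    }
}
```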

3. A web based IDE, based on Eclipse Che, so you can edit your code in situ, without leaving openshift.io. This is a great feature, allowing you to develop and test the application you are building entirely in the browser. However, if you don't want to leave the comfort blanket of your favourite IDE running on your laptop, and just push code to GitHub for openshift.io to pick up and deploy, that's OK too.

4. Analytics built in (using fabric8 analytics): to identify security risks in dependencies you may be using, and also to identify other dependencies that might be a better fit (e.g. flagging that you're using a really old version of Commons Math).

5. Agile management, to allow you to plan and track development items for your code. This is really useful for collaborative development.

The fabric8 ecosystem also provides lots of developer tooling and examples to help developers get started.

Fabric8 keeps on growing

The fabric8 project has been at the forefront of innovation to enable software professionals to develop and deploy faster to cloud native environments. With the announcement of openshift.io at Red Hat Summit 2017, the fabric8 ecosystem has expanded to incorporate all of the technologies that make up the openshift.io developer platform (except the IDE, which is based on Che).

The fabric8 project will continue to innovate and increase its scope to ensure it remains the best environment for developers to accelerate from idea to production. The fabric8 platform consists of over a hundred repositories under the GitHub fabric8io organization - all built on top of the fabric8 platform itself, keeping all releases in sync and allowing us to continually improve our delivery.

Saturday, September 24, 2016

Microservices Journey with Apache Camel

Moving anything towards a container based, Microservices architecture is pretty hard. Doing it while the technology has been evolving so quickly is even harder, but creating a user experience that makes Microservices really easy to build is the hardest thing we've ever done - and we would love to tell you about it!
If you are in Atlanta or Minneapolis the first week of October, Red Hat is hosting A Microservices Journey with Apache Camel, where James Strachan, Claus Ibsen, Christian Posta, James Rawling and I will tell you about the expedition through containers, Kubernetes/OpenShift, continuous delivery, API management, and how we've made it all super easy to use with fabric8!

Thursday, November 06, 2014

Fabric8 version 2.0 released - Next Generation Platform for managing and deploying enterprise services anywhere

The Fabric8 open source project started 5 years ago as a private project aimed at making large
deployments of Apache Camel, CXF, ActiveMQ etc. easy to deploy and manage.

At the time of its inception, we looked at lots of existing open source solutions that we could leverage to provide the flexible framework that we knew our users would require. Unfortunately, at that time nothing was a good fit, so we rolled our own - with core concepts based around:

  • Centralised control
  • A runtime registry of services and containers
  • Managed hybrid deployments, from laptop to open hybrid cloud (e.g. OpenShift)

All services were deployed into an Apache Karaf runtime, which allowed for dynamic updates of running services. The modularisation using OSGi had some distinct advantages: dynamic deployment of new services, container service discovery, and a consistent way of administration. However, this also meant that Fabric8 was very much tied to the Karaf runtime, and forced anyone using Fabric8 and Camel to use OSGi too.

We are now entering a sea-change towards immutable infrastructure, microservices and open standardisation around how this is done. Docker and Kubernetes are central to that change, and are being backed with big investments. Kubernetes in particular, being based on the unrivalled experience that Google brings to clustering containers at scale, will drive standardisation in the way containers are deployed and managed. It would be irresponsible for Fabric8 not to embrace this change, and to do it in a way that makes it easy for Fabric8 1.x users to migrate. By taking this path, we are ensuring that Fabric8 users will benefit from the rapidly growing ecosystem of vendors and projects providing applications and tooling around Docker, and will also be free to move their deployments to any of the growing list of platforms that support Kubernetes. However, we are aware that there are many reasons users may want a platform that is 100% Java - so we support that too!

The goal of Fabric8 v2 is to utilise open source and open standards: to enable the same way of configuring and monitoring services as Fabric8 1.x, but to do it for any Java based service, on any operating system. We also want to future-proof the way users work, which is why adopting Kubernetes is so important: you will be able to leverage this style of deployment anywhere.
Fabric8 v2 is already better tested, more nimble and more scalable than any previous version we've released, and as Fabric8 will also be adopted as a core service in OpenShift 3, it will be hardened at large scale very quickly.

So some common questions:

Does this mean that Fabric8 no longer supports Karaf ?
No - Karaf is one of the many container options we support in Fabric8. You can still deploy your apps in the same way as Fabric8 v1; it's just that Fabric8 v2 will scale so much better :).

Is ZooKeeper no longer supported ?
In Fabric8 v1, ZooKeeper was used to implement the service registry. This is being replaced by Kubernetes. Fabric8 will still run with ZooKeeper, however, to enable cluster coordination, such as master-slave elections for messaging systems or databases.

I've invested a lot of effort in Fabric8 v1 - does all this get thrown away ?
Absolutely not. It will be straightforward to migrate to Fabric8 v2.

When should I look to move off Fabric8 v1 ?
As soon as possible. There's a marked improvement in features, scalability and manageability.

We don't want to use Docker - can we still use Fabric8 v2?
Yes - Fabric8 v2 also has a pure Java implementation, in which it can still run "Java containers".

Our platforms don't support Go - does that preclude us from running Fabric8 v2 ?
No - although Kubernetes relies on the Go programming language, we understand that won't be an option for some folks, which is why fabric8 has an optional Java implementation. That way you can still use the same framework and tooling, but it leaves open the option to simply change the implementation at a later date if you require the performance, application density and scalability that running Kubernetes on something like Red Hat's OpenShift or Google's Cloud Platform can give you.

We are also extending the services that we supply with fabric8, to include metric collection, alerting, auto-scaling, application performance monitoring and other goodies.

Over the next few weeks, the fabric8 community will be extending the quickstarts to demonstrate how easy it is to run microservices, as well as application containers, in Fabric8. You can run Fabric8 on your laptop (using 100% Java if you wish), on your in-house bare metal (again, 100% Java if you wish), or on any PaaS running Kubernetes.

Friday, March 07, 2014

Fuse At the DevNation Conference!

The JBoss Fuse engineering team have sponsored and organised CamelOne for the last 3 years, but after CamelOne 2013, the opportunity came up to put all the effort into a new developer conference sponsored by Red Hat, called DevNation. This is the first time the event has been run, and it's a great opportunity to learn about all aspects of development and deployment. CamelOne was focused on Apache projects used for integration, but that in itself is quite limited, and as an integration developer you need to know about so much more. DevNation is an opportunity to learn from like-minded developers about all aspects of real-world deployments, from Hadoop to Elasticsearch, from best practices in DevOps or OSGi, to getting an insight into Docker, Apache Spark and so much more. DevNation has a lot of promise to be a great developer conference, with a broad scope that will be informative and fun. It's for this reason that the Fuse team decided to focus our attention on DevNation this year, rather than CamelOne.

The traditional way of delivering applications is outdated. Many users are rolling out across hybridised environments, and the need to be insulated from all those different environments - to have location independence and the ability to dynamically deploy, find and manage all your integration services - is going to be the key theme for the Fuse tracks at DevNation, along with all the usual tips, tricks and secret ninja (OK, undocumented) stuff that we like to share with attendees.

DevNation this year is being held in San Francisco, and will run from Sunday, April 13 to April 17. You can register here - and we really hope to see you there!

Tuesday, December 17, 2013

One Technology Trend for 2014: "The Internet Of Things"

I was reading some online articles and came across Technology Trends for 2014 - the number one being the 'Internet of Things', or IoT for short. This isn't exactly a new concept - the promise of smart homes, with everything from intelligent lights to A.I. for washing machines that can be monitored remotely, has been around for a while. And who couldn't resist the concept of a smart fridge that can stock itself? The term Internet of Things has been around for over a decade, first proposed by Kevin Ashton while at the Auto-ID Center at MIT, primarily driven by an interest in RFID, but the ideas and use cases for an Internet of Everything have taken a while to mature.

There have been several drivers behind the IoT. The demand for renewable energy means that smart grids have to monitor and respond to demand and generation of electricity in a more agile manner - allowing for bi-directional energy supply from small energy producers (potentially you and me) - which requires smart metering and monitoring. Then there's the exponential growth of smartphones - more people are always connected, and that trend will continue.

However, when the IoT was first envisaged all those years ago, there were some technology inhibitors:

1. The limitation of IPv4 in terms of the number of addresses available
2. The capacity of the internet for a fully connected IoT
3. The ability for mediators to scale to millions of concurrent connections
4. The ability to store and analyse the data in a scalable way
5. The ability to analyse all the data to make sensible decisions in a timely manner

Fast forward to today and, from a technology perspective, most of these are either solved (e.g. IPv6) or the pieces are available - and Red Hat is ideally placed to provide the whole solution for a scalable backend for the IoT, and to do it all on open source software.

Firstly, we need the ability to provide a standards-based, horizontally scalable solution for handling connectivity to hundreds of thousands of concurrent connections. JBoss A-MQ combines the best of Apache-licensed middleware from Apache ActiveMQ, Qpid and HornetQ to form a highly scalable messaging solution that supports MQTT, AMQP, WebSockets and STOMP.

The IoT will generate a lot of unstructured data, which needs to be correlated and analysed, and one of the leading NoSQL solutions for doing this is Hadoop. If you want Hadoop to scale and perform, then the best infrastructure to run it on is a combination of GlusterFS and OpenStack.

Getting real-time data into Hadoop's HDFS can be problematic, but JBoss Fuse already has some of the best solutions for doing just that.
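For example, a Camel route running in JBoss Fuse can drain a telemetry queue straight into HDFS using the camel-hdfs component. This is a sketch only - the broker, namenode address, destination and path are all illustrative:

```xml
<!-- Illustrative sketch: stream messages from a telemetry queue into HDFS.
     The queue name, namenode host/port and path are invented for this example. -->
<route>
  <from uri="activemq:queue:iot.telemetry"/>
  <to uri="hdfs://namenode:8020/data/telemetry?fileType=SEQUENCE_FILE"/>
</route>
```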

Finally, if you want to use complex event processing to make decisions based on the flow of data from your connected devices, using causality and temporal logic, then JBoss BRMS is the best open source solution on the market.
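To give a flavour of temporal reasoning over event streams, here is a Drools Fusion rule sketch - the event type, entry-point name and threshold are invented for illustration:

```
// Illustrative Drools (DRL) sketch: the TemperatureReading event type,
// "sensors" entry-point and 30-degree threshold are invented for this example.
declare TemperatureReading
    @role( event )
    value : double
end

rule "Sustained high temperature over the last 5 minutes"
when
    Number( doubleValue > 30.0 ) from accumulate(
        TemperatureReading( $v : value ) over window:time( 5m )
            from entry-point "sensors",
        average( $v ) )
then
    System.out.println( "Raise alert: sustained high temperature" );
end
```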

Red Hat is going to be right at the centre of IoT solutions in 2014.

Friday, September 06, 2013

Apache Camel Broker Component for ActiveMQ 5.9

Embedding Apache Camel inside the ActiveMQ broker provides great flexibility for extending the message broker with the integration power of Camel. Apache Camel routes also benefit in that you can avoid the serialization and network costs of connecting to ActiveMQ remotely - if you use the activemq component.

One of the really great things about Apache ActiveMQ is that it works so well with Apache Camel.

If, however, you want to change the behaviour of messages flowing through the ActiveMQ message broker itself, you are limited to the shipped set of ActiveMQ Broker Interceptors - or to developing your own Broker plugin and introducing it as a jar on the class path of the ActiveMQ broker.

What would be really useful, though, is to combine the Interceptors and Camel together - making it easier to configure Broker Interceptors using Camel routes - and that's exactly what we have done for the upcoming ActiveMQ 5.9 release with the broker Camel component. You can include a camel.xml file in your ActiveMQ broker config - and then, if you want to take all messages sent to a queue and publish them to a topic, changing their priority along the way, you can do something like this:
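A minimal sketch of such a route in the broker's camel.xml (the destination names here are illustrative):

```xml
<!-- Sketch of the idea described above: intercept everything sent to a queue,
     raise the message priority, and republish to a topic. Destination names
     are illustrative. -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="broker:queue:test.queue"/>
    <setHeader headerName="JMSPriority">
      <constant>9</constant>
    </setHeader>
    <to uri="broker:topic:test.topic"/>
  </route>
</camelContext>
```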

A few things worth noting:

  • A broker component only adds an intercept into the broker if it's started - so the broker component will not add any overhead to the running broker until it's used - and then the overhead will be trivial.
  • You intercept messages using the broker component when they have been received by the broker - but before they are processed (persisted or routed to a destination).
  • The in message on the CamelExchange is a Camel Message, but also a JMS Message (messages routed through ActiveMQ from Stomp/MQTT/AMQP etc. are always translated into JMS messages).
  • You can use wildcards on a destination to intercept messages from destinations matching the wildcard.
  • After the intercept, you have to explicitly send the message back to the broker component - this allows you to either drop select messages (by not sending) - or, like in the above case - re-route the message to a different destination.
  • There is one deliberate caveat though: you can only send messages to a broker component that have been intercepted - i.e. routing a Camel message from another component (e.g. File) would result in an error.
There are some extra classes that have been added to the activemq-broker package to enable views of the running broker without using JMX, and to support the use of the broker component:
org.apache.activemq.broker.view.MessageBrokerView, which provides methods to retrieve statistics on the broker; from the MessageBrokerView you can retrieve a org.apache.activemq.broker.view.BrokerDestinationView for a particular destination. This means you can add flexible routing inside the broker by doing something like the following - to route messages when a destination's queue depth reaches a certain limit:
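A sketch of what depth-based routing could look like - this assumes a BrokerDestinationView has been registered in the Camel registry under the (invented) bean name destinationView, and the destination names and the limit of 100 are illustrative:

```xml
<!-- Illustrative sketch: when the queue backs up past 100 messages, divert
     new messages to an overflow queue. The bean id "destinationView", the
     destination names and the limit are invented for this example. -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="broker:queue:orders"/>
    <choice>
      <when>
        <spel>#{@destinationView.queueSize &gt;= 100}</spel>
        <to uri="broker:queue:orders.overflow"/>
      </when>
      <otherwise>
        <!-- explicitly send the intercepted message back to the broker -->
        <to uri="broker:queue:orders"/>
      </otherwise>
    </choice>
  </route>
</camelContext>
```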

This is using the Camel Message Router pattern - note the use of the Spring Expression Language (SpEL) in the when clause.

Wednesday, August 21, 2013

Fuse days are back

One thing I get constantly asked about is the Fuse days that FuseSource used to run around Europe and the US - now that FuseSource is part of Red Hat, will they still be happening? Well, the answer is an emphatic YES! After taking some time to settle in and find out where the tea bags are hidden in the Red Hat middleware group, it's time to start things rolling again. We have been working out the messaging and integration strategy, and will be having an engineering face-to-face meeting in Dublin, Ireland, in the week beginning 23rd September 2013. It's short notice - but we could hold an impromptu Fuse day in Dublin that week.
You may even get to find out what we are doing in 2014 before the engineers!

Drop me a line if you want to attend - it'll be free - you just have to get yourself to Dublin. I'll be posting dates for upcoming Fuse days in Europe and the US over the next couple of weeks - now where's my cup of tea ...