Our annual Member’s Meeting for the ASF went well, resulting in some new members getting elected as well as two new directors being elected to the board. While we wait for a bit of paperwork to get filed, let’s document what needs to happen after a Member’s Meeting at Apache.
The ASF is a membership corporation and holds an Annual Member’s Meeting every year to elect the board and nominate/elect new members. As a volunteer-run software organization, we run this process by – wait for it – emailing around a set of cryptically formatted text files from our private Subversion repository. Of course, as (mostly) software people, we could make it easier on ourselves… with better software. Shoemaker’s children, indeed.
The ASF is holding its annual Member’s Meeting now, where Members get to elect a new board as well as elect new individual Members to the Foundation. We do this by holding a live IRC meeting on a Tuesday, then voting with secure email ballots asynchronously during the recess, then reconvening on Thursday to announce results. But how does the meeting really work?
Some great recent discussions around the upcoming Member’s Meeting have got me thinking about the larger question: how can the ASF as an organization function better, and how does the board effect that? I think there is one more important concept, along with oversight and vision, that the ASF needs to have in a board.
The ASF is holding its annual Member’s Meeting next week to elect a new board and a number of new Members to the ASF. I’m honored to have been nominated to stand for the board election, and I’m continuing my tradition of publicly posting my vision for Apache each year.
Please read on for my take on what’s important for the ASF’s future…
The ASF is holding its annual Member’s Meeting soon, where we will elect a new 9-member Board of Directors for a one-year term. I’ve been honored with a nomination to run for the board again, as have a number of other excellent Member candidates. While I’m writing my nomination statement – my 2016 director statement and earlier ones are posted – I’ve been thinking about what Apache really needs in a board to manage the growth of our projects and to improve our operations.
You probably use or contribute to several Apache projects. But do you know what goes on behind the scenes at the ASF? Besides all the work of the 200+ project communities, the ASF has an annual budget of about one million USD to fund the services our projects use. How we manage providing these services – and governing the corporation behind the projects – continues to change and improve.
Juggling several speaking engagements coming up, I’m reminded of how hard the job of conference organizers is. Having helped to run ApacheCon as part of a volunteer team for years, I know how hard it is selecting talks, wrangling speaker acceptances (and rejections), and ensuring your final conference schedule is appealing. And wrangling your clunky CFP system and keeping the finicky schedule website updated are two problems that software hasn’t solved yet.
Equally important is how the conference acceptance & organization process works from the speaker’s side. Remember? Those people who make all the content your conference relies on? All those people who you love and appreciate – but who you don’t pay anything – and for whom you’ll do anything to fix last-minute problems? While we can’t prevent all the last-minute problems, there are a few simple steps to improve speaker communication that help head them off.
Website Brand Review of Apache Hadoop
We’ve all heard of Apache® Hadoop® – well, at least heard of Hadoop, and by now you should realize it’s an Apache project! But when was the last time you took a critical eye to the actual Apache Hadoop project’s homepage?
Here’s my quick review of the Apache Hadoop project, told purely from the point of view of a new user finding the project website.
What Is Apache Hadoop?
“Apache Hadoop (is) a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models”
“Hadoop is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures.”
Website Brand Review of Apache Mahout
While we’ve all heard about Apache Hadoop, did you know there are over a dozen big data projects at Apache? We host projects that provide everything for your big data stack: databases, storage, streaming, logging, analysis, machine learning, and more. Apache Mahout is one of the pieces that lets a big data stack do higher-level work for you.
Here’s my quick review of the Apache Mahout project, told purely from the point of view of a new user finding the project website.
Happy Birthday! This month is the Apache Mahout project’s 6th #ApacheBirthday!
What Is Apache Mahout?
“The Apache Mahout™ project’s goal is to build an environment for quickly creating scalable performant machine learning applications.”
While this is a laudable statement – and nicely emphasises the community behind the project – it doesn’t directly say what the software itself does.
“The three major components of Mahout are an environment for building scalable algorithms, many new Scala + Spark and H2O (Apache Flink in progress) algorithms, and Mahout’s mature Hadoop MapReduce algorithms.”