Installing CDH 5.15.0 on SLE 11 SP4
Table of Contents
1. Environment Preparation
1.1 Downloading the CDH Packages
Reference: https://www.jianshu.com/p/434a429a9c6e
1.1.1 hadoop
This installation uses CDH 5.15.0 (CDH 5.14.2 is recommended, since it integrates with Phoenix 4.14.0).
Download location of the open-source Hadoop packages (mainly used to check the Hadoop component versions; usually there is no need to download them):
1.1.2 CM
1.1.3 JDK
1.1.4 phoenix
1.1.5 kafka
http://archive.cloudera.com/csds/kafka/
Version compatibility matrix:
https://www.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html#pcm_kafka
https://www.cloudera.com/documentation/kafka/latest.html
Must read (the Kafka parcel 3.1.1 is recommended):
https://www.cloudera.com/documentation/kafka/latest/topics/kafka_requirements.html
vim /etc/security/limits.conf
* hard as unlimited
* soft as unlimited
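A quick way to confirm the limits took effect (a minimal sketch; run it in a fresh login session of the user that will run the Kafka broker):

```bash
# Both should print "unlimited" after the limits.conf change above
ulimit -S -v   # soft address-space ("as") limit
ulimit -H -v   # hard address-space ("as") limit
```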
Kafka version comparison:
https://blog.csdn.net/bigdata_mining/article/details/80854372
1.1.6 Summary
CDH 5.14 download locations:
cm:
http://archive.cloudera.com/cm5/sles/11/x86_64/cm/5.14.2/RPMS/x86_64/
cdh:
http://archive.cloudera.com/cdh5/parcels/5.14.2/
CDH 5 documentation: https://www.cloudera.com/documentation/enterprise/5-15-x/topics/installation.html
Maven repository: https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_vd_cdh5_maven_repo_514x.html#maven_5142
Phoenix download location:
http://www.apache.org/dist/phoenix/apache-phoenix-4.14.0-cdh5.14.2/parcels/
Phoenix Maven artifacts: look them up directly in the Maven repository.
Kafka download location (if Kafka is needed later, JDK 8 is required; Kafka does not support JDK 7):
http://archive.cloudera.com/kafka/parcels/3.1.1/
http://archive.cloudera.com/csds/kafka/
Kafka documentation: https://www.cloudera.com/documentation/kafka/latest.html
Kafka Maven repository: https://www.cloudera.com/documentation/kafka/latest/topics/kafka_packaging.html#concept_kafka_maven
Spark 2 download location: http://archive.cloudera.com/spark2/parcels/2.2.0.cloudera4/
http://archive.cloudera.com/spark2/csd/
Spark 2 documentation: https://www.cloudera.com/documentation/spark2/2-2-x.html
Spark 2 Maven repository: https://www.cloudera.com/documentation/spark2/2-2-x/topics/cds_22_maven_artifacts.html#cds_220_maven_artifacts
1.2 Setting Up the CM Packages as a SLES Repository
Download all of the packages from the CM repo and place them in the /srv/www/SLE-11-sp4/CM directory; also place the JDK 1.8 package downloaded from the Oracle website in the same directory.
Then run createrepo . in that directory to resolve dependencies and generate the repository metadata (the repodata/repomd.xml files).
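A minimal sketch of the whole step (assuming the directory is later served over HTTP; the repo URL and the alias cm-local are placeholders, adjust them to the actual web server setup):

```bash
# Put the CM RPMs and the Oracle JDK 1.8 RPM into the repo directory
mkdir -p /srv/www/SLE-11-sp4/CM
cp /path/to/downloaded/*.rpm /srv/www/SLE-11-sp4/CM/   # source path is illustrative

# Generate the repository metadata (creates the repodata/ directory)
cd /srv/www/SLE-11-sp4
createrepo .

# Register the repository on every node and refresh the metadata
zypper ar http://CRM-CSHC1/SLE-11-sp4 cm-local   # URL depends on the web server config
zypper refresh
```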
1.3 Time Server
Check whether the cluster's time server is reachable and in sync.
This must be checked on every server.
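For example (a sketch; ntp.example.com stands for the actual NTP server):

```bash
# Run on every node
service ntp status               # the NTP init script on SLES 11 is named "ntp"
ntpq -p                          # list peers and check the offset/jitter columns
sudo ntpdate -q ntp.example.com  # query the offset without changing the clock
```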
1.4 Installing the Database
1.4.1 MySQL
MySQL download location:
https://cdn.mysql.com//Downloads/MySQL-5.7/mysql-5.7.25-1.sles11.x86_64.rpm-bundle.tar
MySQL Connector/J download location:
https://cdn.mysql.com//Downloads/Connector-J/mysql-connector-java-5.1.47.tar.gz
- For MySQL 5.6 and 5.7, the MySQL-shared-compat or MySQL-shared package must be installed; it is required by the Cloudera Manager Agent package installation.
1.4.2 PostgreSQL
Recommended by Cloudera; the OS provides its own packages, but installation is awkward because Python 2.7 is required.
https://www.postgresql.org/download/linux/suse/
1.5 Privileges Required for the Non-root User
https://www.suse.com/zh-cn/documentation/sle-ha-12/book_sleha/data/sec_crmreport_nonroot_sudo.html
$ visudo
#Defaults targetpw
#ALL ALL=(ALL) ALL
User_Alias CDH_INSTALLER=omm
Cmnd_Alias CDH_CMD= /bin/chown,/sbin/service,/bin/rm,/usr/bin/id,/usr/bin/install, /sbin/chkconfig,/usr/bin/sed,/bin/mv,/usr/sbin/ntpdate,/usr/bin/zypper
CDH_INSTALLER ALL=(ALL) NOPASSWD: CDH_CMD
omm ALL=(ALL) NOPASSWD: CDH_CMD
%wheel ALL=(ALL) NOPASSWD: CDH_CMD
cloudera-scm ALL=(ALL) NOPASSWD: ALL
# All directories containing the commands above must be added to the omm user's PATH:
omm@CRM-CSHC4:~> vim ~/.profile
export PATH=/bin:/sbin:/usr/bin:/usr/sbin:$PATH
omm@CRM-CSHC4:~> source ~/.profile
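To verify the sudoers entries, the allowed command list can be checked (a small sketch):

```bash
# As root: list the sudo rules that apply to the omm user
sudo -l -U omm

# As omm: a whitelisted command should run without a password prompt
sudo /usr/bin/id
```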
1.6 Checking Ports
ss -tnlp | grep xxxxx
All ports: https://www.cloudera.com/documentation/enterprise/5-15-x/topics/cm_ig_ports.html#concept_k5z_vwy_4j
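For example, to confirm that the ports Cloudera Manager needs are still free (7180 for the admin console, 7182 for agent heartbeats, 3306 for MySQL), a quick check could look like this:

```bash
# Prints a line for each port that is already in use
for port in 7180 7182 3306; do
    ss -tnlp | grep ":$port " || echo "port $port is free"
done
```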
1.7 Updating the Local zypper Repository
CRM-CSHC1:/srv/BigData/SLE-DVD # mv CM ../../www/SLE-11-sp4/
CRM-CSHC1:/srv/www/SLE-11-sp4 # createrepo .
Spawning worker 0 with 5670 pkgs
Workers Finished
Gathering worker results
Saving Primary metadata
Saving file lists metadata
Saving other metadata
2. Installing Cloudera Manager
2.1 Installing JDK 1.8
# Per the CDH documentation:
sudo yum install oracle-j2sdk1.7
Installing JDK 1.8 is recommended instead:
sudo zypper install jdk-1.8_u201.x86_64.rpm
JDK 1.8 is already installed on this host:
omm@CRM-CSHC1:/srv/BigData> java -version
java version "1.8.0_162"
Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
2.2 Installing the Database
Install MySQL.
2.2.1 Installing MySQL
CRM-CSHC1:/srv/www # zypper in mysql*
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following NEW packages are going to be installed:
libmysqlclient15 log4j mysql mysql-client mysql-connector-java mysql-tools perl-DBD-mysql perl-Data-ShowTable
The following packages are not supported by their vendor:
libmysqlclient15 log4j mysql mysql-client mysql-connector-java mysql-tools perl-DBD-mysql perl-Data-ShowTable
8 new packages to install.
Overall download size: 20.2 MiB. After the operation, additional 83.0 MiB will be used.
Continue? [y/n/? shows all options] (y): y
Retrieving package libmysqlclient15-5.0.96-0.6.20.x86_64 (1/8), 473.0 KiB (1.4 MiB unpacked)
Retrieving: libmysqlclient15-5.0.96-0.6.20.x86_64.rpm [done]
Retrieving package log4j-1.2.15-26.32.6.noarch (2/8), 352.0 KiB (394.0 KiB unpacked)
Retrieving: log4j-1.2.15-26.32.6.noarch.rpm [done]
Retrieving package mysql-client-5.5.43-0.7.3.x86_64 (3/8), 3.1 MiB (17.1 MiB unpacked)
Retrieving: mysql-client-5.5.43-0.7.3.x86_64.rpm [done]
Retrieving package perl-Data-ShowTable-3.3-705.10.x86_64 (4/8), 53.0 KiB (117.0 KiB unpacked)
Retrieving: perl-Data-ShowTable-3.3-705.10.x86_64.rpm [done]
Retrieving package mysql-connector-java-5.1.6-1.27.noarch (5/8), 671.0 KiB (817.0 KiB unpacked)
Retrieving: mysql-connector-java-5.1.6-1.27.noarch.rpm [done]
Retrieving package mysql-5.5.43-0.7.3.x86_64 (6/8), 11.0 MiB (42.5 MiB unpacked)
Retrieving: mysql-5.5.43-0.7.3.x86_64.rpm [done]
Retrieving package perl-DBD-mysql-4.008-4.3.x86_64 (7/8), 154.0 KiB (427.0 KiB unpacked)
Retrieving: perl-DBD-mysql-4.008-4.3.x86_64.rpm [done]
Retrieving package mysql-tools-5.5.43-0.7.3.x86_64 (8/8), 4.5 MiB (20.3 MiB unpacked)
Retrieving: mysql-tools-5.5.43-0.7.3.x86_64.rpm [done]
Installing: libmysqlclient15-5.0.96-0.6.20 [done]
Installing: log4j-1.2.15-26.32.6 [done]
Installing: mysql-client-5.5.43-0.7.3 [done]
Installing: perl-Data-ShowTable-3.3-705.10 [done]
Installing: mysql-connector-java-5.1.6-1.27 [done]
Installing: mysql-5.5.43-0.7.3 [done]
Installing: perl-DBD-mysql-4.008-4.3 [done]
Installing: mysql-tools-5.5.43-0.7.3 [done]
2.2.2 Editing the Configuration File
CRM-CSHC1:/srv/www # mv /etc/my.cnf /etc/my.cnf.bak
CRM-CSHC1:/srv/www # vim /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
transaction-isolation = READ-COMMITTED
# Disabling symbolic-links is recommended to prevent assorted security risks;
# to do so, uncomment this line:
symbolic-links = 0
key_buffer_size = 32M
max_allowed_packet = 32M
thread_stack = 256K
thread_cache_size = 64
query_cache_limit = 8M
query_cache_size = 64M
query_cache_type = 1
max_connections = 550
#expire_logs_days = 10
#max_binlog_size = 100M
#log_bin should be on a disk with enough free space.
#Replace '/var/lib/mysql/mysql_binary_log' with an appropriate path for your
#system and chown the specified folder to the mysql user.
log_bin=/var/lib/mysql/mysql_binary_log
#In later versions of MySQL, if you enable the binary log and do not set
#a server_id, MySQL will not start. The server_id must be unique within
#the replicating group.
server_id=1
binlog_format = mixed
read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M
# InnoDB settings
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 64M
innodb_buffer_pool_size = 4G
innodb_thread_concurrency = 8
innodb_flush_method = O_DIRECT
innodb_log_file_size = 512M
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
sql_mode=STRICT_ALL_TABLES
2.2.3 Starting MySQL
CRM-CSHC1:/srv/www # service mysql start
Creating MySQL privilege database...
Installing MySQL system tables...
190416 10:28:07 [Note] /usr/sbin/mysqld (mysqld 5.5.43-log) starting as process 25628 ...
OK
Filling help tables...
190416 10:28:07 [Note] /usr/sbin/mysqld (mysqld 5.5.43-log) starting as process 25740 ...
OK
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h CRM-CSHC1 password 'new-password'
Alternatively you can run:
/usr/bin/mysql_secure_installation
which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.
See the manual for more instructions.
You can start the MySQL daemon with:
rcmysql start
You can test the MySQL daemon with mysql-test package
Please report any problems at http://bugs.mysql.com/
Starting service MySQL done
CRM-CSHC1:/srv/www # ss -tnlp | grep mysql
0 50 *:3306 *:* users:(("mysqld",26323,11))
CRM-CSHC1:/srv/www #
2.2.4 Setting the Initial root Password
CRM-CSHC1:/srv/www # /usr/bin/mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MySQL to secure it, we'll need the current
password for the root user. If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.
Set root password? [Y/n] y
New password: "newtouch"
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] y
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] y
... Success!
By default, MySQL comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MySQL
installation should now be secure.
Thanks for using MySQL!
2.2.5 Installing mysql-connector-java
Already installed via zypper; see 2.2.1.
Manual installation (the jar filename in both commands must match the connector version that was actually downloaded):
cp mysql-connector-java-5.1.34.jar /usr/share/java/
ln -s /usr/share/java/mysql-connector-java-5.1.40-bin.jar /usr/share/java/mysql-connector-java.jar
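A consistent version of the two commands, assuming the mysql-connector-java-5.1.47 tarball from 1.4.1 is used (the jar name inside the archive is an assumption; adjust it if it differs):

```bash
tar zxvf mysql-connector-java-5.1.47.tar.gz
mkdir -p /usr/share/java/
cp mysql-connector-java-5.1.47/mysql-connector-java-5.1.47-bin.jar /usr/share/java/
ln -s /usr/share/java/mysql-connector-java-5.1.47-bin.jar /usr/share/java/mysql-connector-java.jar
```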
2.2.6 Creating the Databases Required by Cloudera (optional; they do not have to be created manually)
Service | Database | User |
---|---|---|
Cloudera Manager Server | scm | scm |
Activity Monitor | amon | amon |
Reports Manager | rman | rman |
Hue | hue | hue |
Hive Metastore Server | metastore | hive |
Sentry Server | sentry | sentry |
Cloudera Navigator Audit Server | nav | nav |
Cloudera Navigator Metadata Server | navms | navms |
Oozie | oozie | oozie |
-- scm
create database scm default character set utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON scm.* TO 'scm'@'%' IDENTIFIED BY 'scm';
flush privileges;
-- amon
create database amon default character set utf8 DEFAULT COLLATE utf8_general_ci;
grant all on amon.* to 'amon'@'%' IDENTIFIED BY 'amon';
flush privileges;
-- rman
create database rman default character set utf8 DEFAULT COLLATE utf8_general_ci;
grant all on rman.* to 'rman'@'%' identified by 'rman';
flush privileges;
-- hue
create database hue default character set utf8 DEFAULT COLLATE utf8_general_ci;
grant all on hue.* to 'hue'@'%' identified by 'hue';
flush privileges;
-- hive
create database metastore default character set utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON metastore.* TO 'hive'@'%' IDENTIFIED BY 'hive';
flush privileges;
-- sentry
create database sentry default character set utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON sentry.* TO 'sentry'@'%' IDENTIFIED BY 'sentry';
flush privileges;
-- nav
create database nav default character set utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON nav.* TO 'nav'@'%' IDENTIFIED BY 'nav';
flush privileges;
-- navms
create database navms default character set utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON navms.* TO 'navms'@'%' IDENTIFIED BY 'navms';
flush privileges;
-- oozie
create database oozie default character set utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON oozie.* TO 'oozie'@'%' IDENTIFIED BY 'oozie';
flush privileges;
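To confirm the grants work, each service account can be logged in once (a sketch; the passwords are the ones set in the GRANT statements above):

```bash
# user or user:database pairs; when no database is given, it matches the user name
for entry in scm amon rman hue hive:metastore sentry nav navms oozie; do
    user=${entry%%:*}; db=${entry##*:}
    mysql -u "$user" -p"$user" -e "USE $db; SELECT 1;" \
        && echo "OK: $user can access $db"
done
```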
2.3 Installing the Cloudera Manager Server Packages
2.3.1 Installation on the Master Node
Master:
sudo zypper in cloudera-manager-daemons cloudera-manager-server cloudera-manager-agent
If no Hadoop components need to be installed on the master node, the agent package can be omitted.
If an Oracle database is used, increase the Java heap size of the Cloudera Manager Server: edit /etc/default/cloudera-scm-server
on the Cloudera Manager Server host, find the line that starts with
export CM_JAVA_OPTS and change the -Xmx2G option to -Xmx4G.
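A sketch of that change (only needed with an Oracle database):

```bash
# Show the current heap setting
grep CM_JAVA_OPTS /etc/default/cloudera-scm-server

# Back up the file, then raise -Xmx2G to -Xmx4G
cp /etc/default/cloudera-scm-server /etc/default/cloudera-scm-server.bak
sed -i 's/-Xmx2G/-Xmx4G/' /etc/default/cloudera-scm-server
```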
2.3.2 Packages Required on All Hosts
sudo zypper in cloudera-manager-daemons cloudera-manager-agent
Configure the Cloudera Manager Agent to point at the Cloudera Manager Server in /etc/cloudera-scm-agent/config.ini.
Configuration properties:
Property | Description |
---|---|
server_host | Name of the host where the Cloudera Manager Server is running. |
server_port | Port on the host where the Cloudera Manager Server is listening. |
vim /etc/cloudera-scm-agent/config.ini
server_host=CRM-CSHC2
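If root SSH access between the nodes is available, the same change can be pushed to every host at once (a sketch; the host list is an assumption based on the host names seen elsewhere in these notes):

```bash
# Point every agent at the Cloudera Manager Server on CRM-CSHC2
for host in CRM-CSHC1 CRM-CSHC2 CRM-CSHC3 CRM-CSHC4 CRM-CSHC5 CRM-CSHC6; do
    ssh root@"$host" \
      "sed -i 's/^server_host=.*/server_host=CRM-CSHC2/' /etc/cloudera-scm-agent/config.ini"
done
```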
2.3.3 Preparing the Database
sudo /usr/share/cmf/schema/scm_prepare_database.sh <databaseType> <databaseName> <databaseUser>
If the database has already been created manually (as in 2.2.6):
CRM-CSHC2:/etc/zypp/repos.d # sudo /usr/share/cmf/schema/scm_prepare_database.sh mysql scm scm
Enter SCM password:
JAVA_HOME=/usr/java/jdk1.8.0_201-amd64
Verifying that we can write to /etc/cloudera-scm-server
Creating SCM configuration file in /etc/cloudera-scm-server
Executing: /usr/java/jdk1.8.0_201-amd64/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/postgresql-connector-java.jar:/usr/share/cmf/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
[ main] DbCommandExecutor INFO Successfully connected to database.
All done, your SCM database is configured correctly!
The script can also create the database automatically:
sudo /usr/share/cmf/schema/scm_prepare_database.sh mysql -h CRM-CSHC2 -uroot -pnewtouch --scm-host CRM-CSHC2 scm scm
If MySQL is not on the same server as cloudera-manager-server:
sudo /usr/share/cmf/schema/scm_prepare_database.sh mysql -h CRM-CSHC2 --scm-host CRM-CSHC2 scm scm
2.3.4 Placing the Parcels in the Target Directory
CRM-CSHC2:/etc/zypp/repos.d # cd /opt/cloudera/parcel-repo
CRM-CSHC2:/opt/cloudera/parcel-repo # ls
CRM-CSHC2:/opt/cloudera/parcel-repo # cp /home/omm/CDH-SLE11/hadoop-parcels/* .
CRM-CSHC2:/opt/cloudera # chown -R cloudera-scm:cloudera-scm parcel-repo/
2.3.5 Starting the Services
sudo service cloudera-scm-agent start
sudo service cloudera-scm-server start
sudo tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log
3. Installing Hadoop
3.1 Starting CM
If no CDH version can be selected here, manifest.json was not loaded and the server log contains a corresponding error.
The fix is to run a local HTTP service and add an entry to /etc/hosts:
echo "127.0.0.1 archive.cloudera.com" >>/etc/hosts
Then, following the URL in the error message, create the matching directory under the web root and symlink manifest.json (and the parcel files) into it:
CRM-CSHC2:/opt/cloudera/parcel-repo # cd /srv/www/SLE-11-sp4/
CRM-CSHC2:/srv/www/SLE-11-sp4 # mkdir -p cdh5/parcels/5.15/
CRM-CSHC2:/srv/www/SLE-11-sp4 # cd cdh5/parcels/5.15/
CRM-CSHC2:/srv/www/SLE-11-sp4/cdh5/parcels/5.15 # ln -s /opt/cloudera/parcel-repo/manifest.json manifest.json
CRM-CSHC2:/srv/www/SLE-11-sp4/cdh5/parcels/5.15 # ln -s /opt/cloudera/parcel-repo/CDH-5.15.0-1.cdh5.15.0.p0.21-sles11.parcel CDH-5.15.0-1.cdh5.15.0.p0.21-sles11.parcel
CRM-CSHC2:/srv/www/SLE-11-sp4/cdh5/parcels/5.15 # ln -s /opt/cloudera/parcel-repo/CDH-5.15.0-1.cdh5.15.0.p0.21-sles11.parcel.sha CDH-5.15.0-1.cdh5.15.0.p0.21-sles11.parcel.sha
CRM-CSHC2:/srv/www/SLE-11-sp4/cdh5/parcels/5.15 # ll
total 8
lrwxrwxrwx 1 root root 68 Apr 17 12:55 CDH-5.15.0-1.cdh5.15.0.p0.21-sles11.parcel -> /opt/cloudera/parcel-repo/CDH-5.15.0-1.cdh5.15.0.p0.21-sles11.parcel
lrwxrwxrwx 1 root root 72 Apr 17 12:55 CDH-5.15.0-1.cdh5.15.0.p0.21-sles11.parcel.sha -> /opt/cloudera/parcel-repo/CDH-5.15.0-1.cdh5.15.0.p0.21-sles11.parcel.sha
lrwxrwxrwx 1 root root 39 Apr 17 11:13 manifest.json -> /opt/cloudera/parcel-repo/manifest.json
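With the /etc/hosts entry and the symlinks in place, the local web server should now answer requests meant for archive.cloudera.com (assuming it serves /srv/www/SLE-11-sp4 as its document root on port 80):

```bash
# Should return the local manifest.json rather than a 404
curl -s http://archive.cloudera.com/cdh5/parcels/5.15/manifest.json | head
```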
Delete all of the remote parcel repository URLs and add the local repository.
Then restart the cloudera-scm-server service.
Wait for the parcel to be activated.
Leave the remaining options alone for now; just try the installation first.
Use the defaults for now.
3.2 Configuring HDFS HA
3.3 Configuring YARN HA
3.4 Testing and Checking Each Component's Web UI
3.5 Oozie Web Console Does Not Open
The Oozie web console requires the ExtJS 2.2 library, which is not bundled with CDH.
Download it from the Ext website: www.extjs.com
EXT download: http://extjs.com/products/extjs/download.php
Release notes: http://extjs.com/blog/2008/08/04/ext-22-released/
The broken console is caused by the missing ext-2.2 directory:
CRM-CSHC6:/opt/cloudera/parcels/CDH/lib/oozie/webapps/oozie # ll
total 148
drwxr-xr-x 3 root root 4096 May 24 2018 META-INF
drwxr-xr-x 3 root root 4096 May 24 2018 WEB-INF
drwxr-xr-x 2 root root 4096 May 24 2018 admin
drwxr-xr-x 3 root root 4096 May 24 2018 console
lrwxrwxrwx 1 root root 37 May 24 2018 docs -> ../../../../share//doc/packages/oozie
lrwxrwxrwx 1 root root 22 May 24 2018 ext-2.2 -> /var/lib/oozie/ext-2.2
-rw-r--r-- 1 root root 3734 May 24 2018 index.html
-rw-r--r-- 1 root root 17871 May 24 2018 json2.js
-rw-r--r-- 1 root root 1108 May 24 2018 oozie-console.css
-rw-r--r-- 1 root root 92235 May 24 2018 oozie-console.js
-rw-r--r-- 1 root root 4660 May 24 2018 oozie_50x.png
CRM-CSHC6:/opt/cloudera/parcels/CDH/lib/oozie/webapps/oozie # cd /var/lib/oozie
CRM-CSHC6:/var/lib/oozie # ls
.bash_history .gnu-emacs .profile bin
.bashrc .inputrc .vimrc mysql-connector-java.jar
.emacs .mozilla .xim.template tomcat-deployment
.fonts .muttrc .xinitrc.template
Once ext-2.2 has been downloaded, upload and extract it into this directory (/var/lib/oozie); the console will then work.
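A sketch of putting the library in place, assuming the downloaded archive is named ext-2.2.zip and Oozie runs as the oozie user:

```bash
cd /var/lib/oozie
unzip ext-2.2.zip              # creates the ext-2.2/ directory the symlink points to
chown -R oozie:oozie ext-2.2
# restart the Oozie service from Cloudera Manager afterwards
```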
4. Issues
4.1 Fixing Bad Host Health During Installation
This symptom, or an incorrect count of unmanaged hosts, is caused by inconsistent cm_guid values in the agents across the nodes.
Fix:
First stop all cloudera-scm-agent services, then delete the following files:
cd /var/lib/cloudera-scm-agent/
rm -rf *
Then start all cloudera-scm-agent services again.
4.2 Notes
- /opt/cloudera/parcel-repo must be owned by the cloudera-scm user and cloudera-scm group, otherwise it cannot be used.
- When installing Kafka, the old manifest.json file needs to be renamed first.
- For sudo, the following two lines must be commented out:
#Defaults targetpw
#ALL ALL=(ALL) ALL
5. CDH 5.15.0 Maven Repository
Project | groupId | artifactId | version |
---|---|---|---|
Apache Hadoop | org.apache.hadoop | hadoop-annotations | 2.6.0-cdh5.15.0 |
org.apache.hadoop | hadoop-ant | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-archive-logs | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-archives | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-assemblies | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-auth | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-aws | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-azure | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-azure-datalake | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-build-tools | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-common | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-datajoin | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-distcp | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-extras | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-gridmix | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-hdfs | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-hdfs-nfs | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-mapreduce-client-app | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-mapreduce-client-common | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-mapreduce-client-core | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-mapreduce-client-hs | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-mapreduce-client-hs-plugins | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-mapreduce-client-jobclient | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-mapreduce-client-nativetask | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-mapreduce-client-shuffle | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-mapreduce-examples | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-maven-plugins | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-minikdc | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-nfs | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-openstack | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-rumen | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-sls | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-yarn-api | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-yarn-applications-distributedshell | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-yarn-applications-unmanaged-am-launcher | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-yarn-client | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-yarn-common | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-yarn-registry | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-yarn-server-applicationhistoryservice | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-yarn-server-common | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-yarn-server-nodemanager | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-yarn-server-resourcemanager | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-yarn-server-tests | 2.6.0-cdh5.15.0 | |
org.apache.hadoop | hadoop-yarn-server-web-proxy | 2.6.0-cdh5.15.0 | |
org.apache.hadoop.contrib | hadoop-hdfs-bkjournal | 2.6.0-cdh5.15.0 | |
Apache Hadoop MRv1 | org.apache.hadoop | hadoop-client | 2.6.0-mr1-cdh5.15.0 |
org.apache.hadoop | hadoop-core | 2.6.0-mr1-cdh5.15.0 | |
org.apache.hadoop | hadoop-examples | 2.6.0-mr1-cdh5.15.0 | |
org.apache.hadoop | hadoop-minicluster | 2.6.0-mr1-cdh5.15.0 | |
org.apache.hadoop | hadoop-streaming | 2.6.0-mr1-cdh5.15.0 | |
org.apache.hadoop | hadoop-test | 2.6.0-mr1-cdh5.15.0 | |
org.apache.hadoop | hadoop-tools | 2.6.0-mr1-cdh5.15.0 | |
Apache Hive | org.apache.hive | hive-accumulo-handler | 1.1.0-cdh5.15.0 |
org.apache.hive | hive-ant | 1.1.0-cdh5.15.0 | |
org.apache.hive | hive-beeline | 1.1.0-cdh5.15.0 | |
org.apache.hive | hive-classification | 1.1.0-cdh5.15.0 | |
org.apache.hive | hive-cli | 1.1.0-cdh5.15.0 | |
org.apache.hive | hive-common | 1.1.0-cdh5.15.0 | |
org.apache.hive | hive-contrib | 1.1.0-cdh5.15.0 | |
org.apache.hive | hive-exec | 1.1.0-cdh5.15.0 | |
org.apache.hive | hive-hbase-handler | 1.1.0-cdh5.15.0 | |
org.apache.hive | hive-hwi | 1.1.0-cdh5.15.0 | |
org.apache.hive | hive-jdbc | 1.1.0-cdh5.15.0 | |
org.apache.hive | hive-metastore | 1.1.0-cdh5.15.0 | |
org.apache.hive | hive-serde | 1.1.0-cdh5.15.0 | |
org.apache.hive | hive-service | 1.1.0-cdh5.15.0 | |
org.apache.hive | hive-shims | 1.1.0-cdh5.15.0 | |
org.apache.hive | hive-testutils | 1.1.0-cdh5.15.0 | |
org.apache.hive | spark-client | 1.1.0-cdh5.15.0 | |
org.apache.hive.hcatalog | hive-hcatalog-core | 1.1.0-cdh5.15.0 | |
org.apache.hive.hcatalog | hive-hcatalog-pig-adapter | 1.1.0-cdh5.15.0 | |
org.apache.hive.hcatalog | hive-hcatalog-server-extensions | 1.1.0-cdh5.15.0 | |
org.apache.hive.hcatalog | hive-hcatalog-streaming | 1.1.0-cdh5.15.0 | |
org.apache.hive.hcatalog | hive-webhcat | 1.1.0-cdh5.15.0 | |
org.apache.hive.hcatalog | hive-webhcat-java-client | 1.1.0-cdh5.15.0 | |
org.apache.hive.shims | hive-shims-0.23 | 1.1.0-cdh5.15.0 | |
org.apache.hive.shims | hive-shims-common | 1.1.0-cdh5.15.0 | |
org.apache.hive.shims | hive-shims-scheduler | 1.1.0-cdh5.15.0 | |
Apache HBase | org.apache.hbase | hbase-annotations | 1.2.0-cdh5.15.0 |
org.apache.hbase | hbase-checkstyle | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-client | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-common | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-examples | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-external-blockcache | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-hadoop-compat | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-hadoop2-compat | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-it | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-prefix-tree | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-procedure | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-protocol | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-resource-bundle | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-rest | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-rsgroup | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-server | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-shaded-client | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-shaded-server | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-shell | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-spark | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-testing-util | 1.2.0-cdh5.15.0 | |
org.apache.hbase | hbase-thrift | 1.2.0-cdh5.15.0 | |
Apache ZooKeeper | org.apache.zookeeper | zookeeper | 3.4.5-cdh5.15.0 |
Apache Sqoop | org.apache.sqoop | sqoop | 1.4.6-cdh5.15.0 |
Apache Pig | org.apache.pig | pig | 0.12.0-cdh5.15.0 |
org.apache.pig | piggybank | 0.12.0-cdh5.15.0 | |
org.apache.pig | pigsmoke | 0.12.0-cdh5.15.0 | |
org.apache.pig | pigunit | 0.12.0-cdh5.15.0 | |
Apache Flume 1.x | org.apache.flume | flume-checkstyle | 1.7.0-cdh5.15.0 |
org.apache.flume | flume-ng-auth | 1.7.0-cdh5.15.0 | |
org.apache.flume | flume-ng-config-filter-api | 1.7.0-cdh5.15.0 | |
org.apache.flume | flume-ng-configuration | 1.7.0-cdh5.15.0 | |
org.apache.flume | flume-ng-core | 1.7.0-cdh5.15.0 | |
org.apache.flume | flume-ng-embedded-agent | 1.7.0-cdh5.15.0 | |
org.apache.flume | flume-ng-environment-variable-config-filter | 1.7.0-cdh5.15.0 | |
org.apache.flume | flume-ng-external-process-config-filter | 1.7.0-cdh5.15.0 | |
org.apache.flume | flume-ng-hadoop-credential-store-config-filter | 1.7.0-cdh5.15.0 | |
org.apache.flume | flume-ng-node | 1.7.0-cdh5.15.0 | |
org.apache.flume | flume-ng-sdk | 1.7.0-cdh5.15.0 | |
org.apache.flume | flume-ng-tests | 1.7.0-cdh5.15.0 | |
org.apache.flume | flume-tools | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-channels | flume-file-channel | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-channels | flume-jdbc-channel | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-channels | flume-kafka-channel | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-channels | flume-spillable-memory-channel | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-clients | flume-ng-log4jappender | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-legacy-sources | flume-avro-source | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-legacy-sources | flume-thrift-source | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-sinks | flume-dataset-sink | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-sinks | flume-hdfs-sink | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-sinks | flume-hive-sink | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-sinks | flume-irc-sink | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-sinks | flume-ng-elasticsearch-sink | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-sinks | flume-ng-hbase-sink | 1.6.0-cdh5.15.0 | |
org.apache.flume.flume-ng-sinks | flume-ng-kafka-sink | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-sinks | flume-ng-morphline-solr-sink | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-sources | flume-jms-source | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-sources | flume-kafka-source | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-sources | flume-scribe-source | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-sources | flume-taildir-source | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-ng-sources | flume-twitter-source | 1.7.0-cdh5.15.0 | |
org.apache.flume.flume-shared | flume-shared-kafka-test | 1.7.0-cdh5.15.0 | |
Apache Oozie | org.apache.oozie | oozie-client | 4.1.0-cdh5.15.0 |
org.apache.oozie | oozie-core | 4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-examples | 4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-hadoop | 2.6.0-mr1-cdh5.15.0.oozie-4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-hadoop-distcp | 2.6.0-mr1-cdh5.15.0.oozie-4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-hadoop-test | 2.6.0-mr1-cdh5.15.0.oozie-4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-hadoop-utils | 2.6.0-mr1-cdh5.15.0.oozie-4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-hbase | 1.2.0-cdh5.15.0.oozie-4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-hcatalog | 1.1.0-cdh5.15.0.oozie-4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-sharelib-distcp | 4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-sharelib-hcatalog | 4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-sharelib-hive | 4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-sharelib-hive2 | 4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-sharelib-oozie | 4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-sharelib-pig | 4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-sharelib-spark | 4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-sharelib-sqoop | 4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-sharelib-streaming | 4.1.0-cdh5.15.0 | |
org.apache.oozie | oozie-tools | 4.1.0-cdh5.15.0 | |
org.apache.oozie.test | oozie-mini | 4.1.0-cdh5.15.0 | |
Apache Mahout | org.apache.mahout | mahout-buildtools | 0.9-cdh5.15.0 |
org.apache.mahout | mahout-core | 0.9-cdh5.15.0 | |
org.apache.mahout | mahout-examples | 0.9-cdh5.15.0 | |
org.apache.mahout | mahout-integration | 0.9-cdh5.15.0 | |
org.apache.mahout | mahout-math | 0.9-cdh5.15.0 | |
org.apache.mahout | mahout-math-scala | 0.9-cdh5.15.0 | |
Apache Whirr | org.apache.whirr | whirr-build-tools | 0.9.0-cdh5.15.0 |
org.apache.whirr | whirr-cassandra | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-cdh | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-chef | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-cli | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-core | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-druid | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-elasticsearch | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-examples | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-ganglia | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-hadoop | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-hama | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-hbase | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-kerberos | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-mahout | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-pig | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-puppet | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-solr | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-yarn | 0.9.0-cdh5.15.0 | |
org.apache.whirr | whirr-zookeeper | 0.9.0-cdh5.15.0 | |
Apache DataFu (Incubating) | com.linkedin.datafu | datafu | 1.1.0-cdh5.15.0 |
Apache Sqoop2 | org.apache.sqoop | connector-sdk | 1.99.5-cdh5.15.0 |
org.apache.sqoop | sqoop-client | 1.99.5-cdh5.15.0 | |
org.apache.sqoop | sqoop-common | 1.99.5-cdh5.15.0 | |
org.apache.sqoop | sqoop-common-test | 1.99.5-cdh5.15.0 | |
org.apache.sqoop | sqoop-core | 1.99.5-cdh5.15.0 | |
org.apache.sqoop | sqoop-docs | 1.99.5-cdh5.15.0 | |
org.apache.sqoop | sqoop-security | 1.99.5-cdh5.15.0 | |
org.apache.sqoop | sqoop-shell | 1.99.5-cdh5.15.0 | |
org.apache.sqoop | sqoop-tomcat | 1.99.5-cdh5.15.0 | |
org.apache.sqoop | sqoop-tools | 1.99.5-cdh5.15.0 | |
org.apache.sqoop | test | 1.99.5-cdh5.15.0 | |
org.apache.sqoop.connector | sqoop-connector-generic-jdbc | 1.99.5-cdh5.15.0 | |
org.apache.sqoop.connector | sqoop-connector-hdfs | 1.99.5-cdh5.15.0 | |
org.apache.sqoop.connector | sqoop-connector-kafka | 1.99.5-cdh5.15.0 | |
org.apache.sqoop.connector | sqoop-connector-kite | 1.99.5-cdh5.15.0 | |
org.apache.sqoop.execution | sqoop-execution-mapreduce | 1.99.5-cdh5.15.0 | |
org.apache.sqoop.repository | sqoop-repository-common | 1.99.5-cdh5.15.0 | |
org.apache.sqoop.repository | sqoop-repository-derby | 1.99.5-cdh5.15.0 | |
org.apache.sqoop.repository | sqoop-repository-postgresql | 1.99.5-cdh5.15.0 | |
org.apache.sqoop.submission | sqoop-submission-mapreduce | 1.99.5-cdh5.15.0 | |
Apache Sentry | org.apache.sentry | sentry-binding-hbase-indexer | 1.5.1-cdh5.15.0 |
org.apache.sentry | sentry-binding-hive | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-binding-hive-conf | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-binding-hive-follower | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-binding-kafka | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-binding-solr | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-core-common | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-core-model-db | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-core-model-indexer | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-core-model-kafka | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-core-model-search | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-dist | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-hdfs-common | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-hdfs-dist | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-hdfs-namenode-plugin | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-hdfs-service | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-policy-common | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-policy-db | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-policy-indexer | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-policy-kafka | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-policy-search | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-provider-cache | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-provider-common | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-provider-db | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-provider-file | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-tests-hive | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-tests-kafka | 1.5.1-cdh5.15.0 | |
org.apache.sentry | sentry-tests-solr | 1.5.1-cdh5.15.0 | |
org.apache.sentry | solr-sentry-handlers | 1.5.1-cdh5.15.0 | |
Apache Parquet | com.twitter | parquet-avro | 1.5.0-cdh5.15.0 |
com.twitter | parquet-cascading | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-column | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-common | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-encoding | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-format | 2.1.0-cdh5.15.0 | |
com.twitter | parquet-generator | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-hadoop | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-hadoop-bundle | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-jackson | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-pig | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-pig-bundle | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-protobuf | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-scala_2.10 | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-scrooge_2.10 | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-test-hadoop2 | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-thrift | 1.5.0-cdh5.15.0 | |
com.twitter | parquet-tools | 1.5.0-cdh5.15.0 | |
Llama | com.cloudera.llama | llama | 1.0.0-cdh5.15.0 |
Apache Spark | org.apache.spark | spark-assembly_2.10 | 1.6.0-cdh5.15.0 |
org.apache.spark | spark-bagel_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-catalyst_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-core_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-docker-integration-tests_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-graphx_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-hive_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-launcher_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-mllib_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-network-common_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-network-shuffle_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-network-yarn_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-repl_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-sql_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-streaming-flume-sink_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-streaming-flume_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-streaming-kafka_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-streaming-mqtt-assembly_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-streaming-mqtt_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-streaming-twitter_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-streaming-zeromq_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-streaming_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-test-tags_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-unsafe_2.10 | 1.6.0-cdh5.15.0 | |
org.apache.spark | spark-yarn_2.10 | 1.6.0-cdh5.15.0 | |
Apache Crunch | org.apache.crunch | crunch-archetype | 0.11.0-cdh5.15.0 |
org.apache.crunch | crunch-contrib | 0.11.0-cdh5.15.0 | |
org.apache.crunch | crunch-core | 0.11.0-cdh5.15.0 | |
org.apache.crunch | crunch-examples | 0.11.0-cdh5.15.0 | |
org.apache.crunch | crunch-hbase | 0.11.0-cdh5.15.0 | |
org.apache.crunch | crunch-hive | 0.11.0-cdh5.15.0 | |
org.apache.crunch | crunch-scrunch | 0.11.0-cdh5.15.0 | |
org.apache.crunch | crunch-spark | 0.11.0-cdh5.15.0 | |
org.apache.crunch | crunch-test | 0.11.0-cdh5.15.0 | |
Apache Avro | org.apache.avro | avro | 1.7.6-cdh5.15.0 |
org.apache.avro | avro-compiler | 1.7.6-cdh5.15.0 | |
org.apache.avro | avro-ipc | 1.7.6-cdh5.15.0 | |
org.apache.avro | avro-mapred | 1.7.6-cdh5.15.0 | |
org.apache.avro | avro-maven-plugin | 1.7.6-cdh5.15.0 | |
org.apache.avro | avro-protobuf | 1.7.6-cdh5.15.0 | |
org.apache.avro | avro-service-archetype | 1.7.6-cdh5.15.0 | |
org.apache.avro | avro-thrift | 1.7.6-cdh5.15.0 | |
org.apache.avro | avro-tools | 1.7.6-cdh5.15.0 | |
org.apache.avro | trevni-avro | 1.7.6-cdh5.15.0 | |
org.apache.avro | trevni-core | 1.7.6-cdh5.15.0 | |
Kite SDK | org.kitesdk | kite-data-core | 1.0.0-cdh5.15.0 |
org.kitesdk | kite-data-crunch | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-data-hbase | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-data-hive | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-data-mapreduce | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-data-oozie | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-data-s3 | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-data-spark | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-hadoop-compatibility | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-maven-plugin | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-minicluster | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-avro | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-core | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-hadoop-core | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-hadoop-parquet-avro | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-hadoop-rcfile | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-hadoop-sequencefile | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-json | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-maxmind | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-metrics-scalable | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-metrics-servlets | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-protobuf | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-saxon | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-solr-cell | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-solr-core | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-tika-core | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-tika-decompress | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-twitter | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-morphlines-useragent | 1.0.0-cdh5.15.0 | |
org.kitesdk | kite-tools | 1.0.0-cdh5.15.0 | |
Apache Solr | org.apache.lucene | lucene-analyzers-common | 4.10.3-cdh5.15.0 |
org.apache.lucene | lucene-analyzers-icu | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-analyzers-kuromoji | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-analyzers-morfologik | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-analyzers-phonetic | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-analyzers-smartcn | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-analyzers-stempel | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-analyzers-uima | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-benchmark | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-classification | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-codecs | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-core | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-demo | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-expressions | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-facet | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-grouping | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-highlighter | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-join | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-memory | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-misc | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-queries | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-queryparser | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-replicator | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-sandbox | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-spatial | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-suggest | 4.10.3-cdh5.15.0 | |
org.apache.lucene | lucene-test-framework | 4.10.3-cdh5.15.0 | |
org.apache.solr | solr-analysis-extras | 4.10.3-cdh5.15.0 | |
org.apache.solr | solr-cell | 4.10.3-cdh5.15.0 | |
org.apache.solr | solr-clustering | 4.10.3-cdh5.15.0 | |
org.apache.solr | solr-core | 4.10.3-cdh5.15.0 | |
org.apache.solr | solr-dataimporthandler | 4.10.3-cdh5.15.0 | |
org.apache.solr | solr-dataimporthandler-extras | 4.10.3-cdh5.15.0 | |
org.apache.solr | solr-langid | 4.10.3-cdh5.15.0 | |
org.apache.solr | solr-security-util | 4.10.3-cdh5.15.0 | |
org.apache.solr | solr-solrj | 4.10.3-cdh5.15.0 | |
org.apache.solr | solr-test-framework | 4.10.3-cdh5.15.0 | |
org.apache.solr | solr-uima | 4.10.3-cdh5.15.0 | |
org.apache.solr | solr-velocity | 4.10.3-cdh5.15.0 | |
Cloudera Search | com.cloudera.search | search-crunch | 1.0.0-cdh5.15.0 |
com.cloudera.search | search-mr | 1.0.0-cdh5.15.0 | |
HBase Indexer | com.ngdata | hbase-indexer-all | 1.5-cdh5.15.0 |
com.ngdata | hbase-indexer-cli | 1.5-cdh5.15.0 | |
com.ngdata | hbase-indexer-common | 1.5-cdh5.15.0 | |
com.ngdata | hbase-indexer-demo | 1.5-cdh5.15.0 | |
com.ngdata | hbase-indexer-dist | 1.5-cdh5.15.0 | |
com.ngdata | hbase-indexer-engine | 1.5-cdh5.15.0 | |
com.ngdata | hbase-indexer-model | 1.5-cdh5.15.0 | |
com.ngdata | hbase-indexer-morphlines | 1.5-cdh5.15.0 | |
com.ngdata | hbase-indexer-mr | 1.5-cdh5.15.0 | |
com.ngdata | hbase-indexer-server | 1.5-cdh5.15.0 | |
com.ngdata | hbase-sep-api | 1.5-cdh5.15.0 | |
com.ngdata | hbase-sep-demo | 1.5-cdh5.15.0 | |
com.ngdata | hbase-sep-impl | 1.5-hbase1.2-cdh5.15.0 | |
com.ngdata | hbase-sep-impl-common | 1.5-cdh5.15.0 | |
com.ngdata | hbase-sep-tools | 1.5-cdh5.15.0 | |
Apache Kudu | org.apache.kudu | kudu-client | 1.7.0-cdh5.15.0 |
org.apache.kudu | kudu-client-tools | 1.7.0-cdh5.15.0 | |
org.apache.kudu | kudu-flume-sink | 1.7.0-cdh5.15.0 | |
org.apache.kudu | kudu-mapreduce | 1.7.0-cdh5.15.0 | |
org.apache.kudu | kudu-spark2-tools_2.11 | 1.7.0-cdh5.15.0 | |
org.apache.kudu | kudu-spark2_2.11 | 1.7.0-cdh5.15.0 | |
org.apache.kudu | kudu-spark_2.10 | 1.7.0-cdh5.15.0 |
