In a distributed system there are always many service nodes working at once; the number of nodes grows and shrinks with the scale of the network, and individual nodes may crash and recover. Given this churn, how do we make sure client requests are handled by a live server? With ZooKeeper we can track the dynamic joining and leaving of service nodes through the creation and automatic deletion of ephemeral nodes.
The Ignite distributed cache, for example, can use ZooKeeper to discover Ignite nodes joining and leaving the cluster; this is a typical application of ZooKeeper for managing service nodes. Let's look at the key code:
// Key call: create a node whose name ends in an auto-incremented id; the same
// primitive underpins the distributed-lock implementation.
// Four arguments:
// 1. node path  2. node data
// 3. ACL; Ids.OPEN_ACL_UNSAFE leaves the node completely open
// 4. create mode; CreateMode.EPHEMERAL_SEQUENTIAL creates an ephemeral sequential
//    node that is deleted automatically when the session disconnects
String createdPath = zk.create(
        "/" + clusterNode + "/" + serverNode,
        address.getBytes("utf-8"),
        Ids.OPEN_ACL_UNSAFE,
        CreateMode.EPHEMERAL_SEQUENTIAL);
Creating nodes with CreateMode.EPHEMERAL_SEQUENTIAL gives us real-time tracking of service nodes. Indeed, this is the same mode used in the earlier article 《》 to build the distributed shared lock on sequential nodes. EPHEMERAL_SEQUENTIAL creates an ephemeral sequential node: once the client's session ends, the server deletes the node automatically.
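To make that lifecycle concrete, here is a minimal sketch, assuming a ZooKeeper server on localhost:2181 and an existing persistent /Locks node (the EphemeralDemo class name and the demo- prefix are made up for illustration):

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralDemo
{
    public static void main(String[] args) throws Exception
    {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 5000, new Watcher()
        {
            public void process(WatchedEvent event) {}
        });
        // The server appends a monotonically increasing suffix to the name,
        // so the result looks like /Locks/demo-0000000003; concurrent
        // creators therefore never collide
        String path = zk.create("/Locks/demo-", "payload".getBytes("utf-8"),
                Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        System.out.println(path);
        // Closing the session is enough: ZooKeeper deletes the ephemeral
        // node without any explicit delete() call
        zk.close();
    }
}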
So each server creates an ephemeral node in ZooKeeper. By watching for node events we learn when a server has joined the service network, and when we see the ephemeral node's deletion event we know the corresponding server has left it. Let's walk through the code.
1. After startup, each server creates an ephemeral node in ZooKeeper
package com.coshaho.learn.zookeeper;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

/**
 * A service node that registers itself in ZooKeeper after startup.
 * @author coshaho
 */
public class AppServer extends Thread
{
    private String clusterNode = "Locks";
    private String serverNode = "mylock";
    private String serverName;
    private long sleepTime;

    public void run()
    {
        try
        {
            connectZookeeper(serverName);
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }

    public void connectZookeeper(String address) throws Exception
    {
        ZooKeeper zk = new ZooKeeper("192.168.1.104:12181", 5000, new Watcher()
        {
            public void process(WatchedEvent event) {}
        });

        // Key call: create a node whose name ends in an auto-incremented id;
        // the same primitive underpins the distributed-lock implementation.
        // Four arguments:
        // 1. node path  2. node data
        // 3. ACL; Ids.OPEN_ACL_UNSAFE leaves the node completely open
        // 4. create mode; CreateMode.EPHEMERAL_SEQUENTIAL creates an ephemeral
        //    sequential node, deleted automatically when the session disconnects
        String createdPath = zk.create(
                "/" + clusterNode + "/" + serverNode,
                address.getBytes("utf-8"),
                Ids.OPEN_ACL_UNSAFE,
                CreateMode.EPHEMERAL_SEQUENTIAL);
        System.out.println("create: " + createdPath);
        Thread.sleep(sleepTime);
    }

    public AppServer(String serverName, long sleepTime)
    {
        this.serverName = serverName;
        this.sleepTime = sleepTime;
    }
}
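One caveat worth flagging: zk.create assumes the persistent parent node /Locks already exists. ZooKeeper does not create intermediate nodes, so the call fails with KeeperException.NoNodeException otherwise. A sketch of preparing the parent once, before any server starts (this setup step is an assumption, not part of the original code):

// Run once, e.g. at deployment time; the parent must be PERSISTENT
// because ephemeral nodes are not allowed to have children
if (zk.exists("/" + clusterNode, false) == null)
{
    zk.create("/" + clusterNode, new byte[0],
            Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
}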
2. The node-management service watches ZooKeeper for creation and deletion of the ephemeral nodes
package com.coshaho.learn.zookeeper;

import java.util.ArrayList;
import java.util.List;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.EventType;
import org.apache.zookeeper.ZooKeeper;

/**
 * A client that registers a watch on the server nodes for changes.
 * @author coshaho
 */
public class AppMaster
{
    private String clusterNode = "Locks";
    private ZooKeeper zk;
    private volatile List<String> serverList;

    public void connectZookeeper() throws Exception
    {
        // Register the global default watcher
        zk = new ZooKeeper("192.168.1.104:12181", 5000, new Watcher()
        {
            public void process(WatchedEvent event)
            {
                if (event.getType() == EventType.NodeChildrenChanged
                        && ("/" + clusterNode).equals(event.getPath()))
                {
                    try
                    {
                        updateServerList();
                    }
                    catch (Exception e)
                    {
                        e.printStackTrace();
                    }
                }
            }
        });
        updateServerList();
    }

    private void updateServerList() throws Exception
    {
        List<String> newServerList = new ArrayList<String>();
        // A watch fires only once after it is registered; passing true
        // re-registers the default watcher so we keep receiving events
        List<String> subList = zk.getChildren("/" + clusterNode, true);
        for (String subNode : subList)
        {
            // Read the node's data
            byte[] data = zk.getData("/" + clusterNode + "/" + subNode, false, null);
            newServerList.add(new String(data, "utf-8"));
        }
        serverList = newServerList;
        System.out.println("server list updated: " + serverList);
    }

    public static void main(String[] args) throws Exception
    {
        AppMaster ac = new AppMaster();
        ac.connectZookeeper();
        Thread.sleep(Long.MAX_VALUE);
    }
}
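A robustness note on updateServerList(): a server can exit between getChildren() and getData(), in which case getData() throws KeeperException.NoNodeException and the whole refresh fails. A more defensive loop might look like this (a sketch layered on the code above, not part of the original; it also requires import org.apache.zookeeper.KeeperException):

// Drop-in replacement for the loop body in updateServerList()
for (String subNode : subList)
{
    try
    {
        byte[] data = zk.getData("/" + clusterNode + "/" + subNode, false, null);
        newServerList.add(new String(data, "utf-8"));
    }
    catch (KeeperException.NoNodeException gone)
    {
        // The node vanished after we listed the children; skip it, the
        // next NodeChildrenChanged event will trigger another refresh
    }
}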
3. Start two servers
package com.coshaho.learn.zookeeper;

public class Server1
{
    public static void main(String[] args) throws Exception
    {
        AppServer server1 = new AppServer("Server1", 5000);
        server1.start();
    }
}

package com.coshaho.learn.zookeeper;

public class Server2
{
    public static void main(String[] args) throws Exception
    {
        AppServer server2 = new AppServer("Server2", 10000);
        server2.start();
    }
}
4. Results
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
server list updated: []
server list updated: [Server1]
server list updated: [Server2, Server1]
server list updated: [Server2]
server list updated: []

Server1 sleeps 5 seconds and Server2 sleeps 10, so their JVMs exit in that order; as each session ends, ZooKeeper deletes the corresponding ephemeral node and the list shrinks back to empty.
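Note that when a JVM simply exits, ZooKeeper only removes the ephemeral node once the session times out. To drop the node the instant a server finishes, close the handle explicitly; a sketch of the end of connectZookeeper() with this change (an assumption, not in the original code):

Thread.sleep(sleepTime);
// Explicitly closing the session makes ZooKeeper delete the ephemeral
// node immediately, instead of waiting for the session timeout
zk.close();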