Picking up from the previous post (http://www.iteye.com/topic/994833),
we saw the following snippet in JIoEndpoint's start() method:
// Create worker collection
if (executor == null) {
    workers = new WorkerStack(maxThreads);
}
In the previous post, executor was always null. So when is it not null? The Connector element in server.xml also has an executor attribute, which refers to the name of an Executor element (see
http://tomcat.apache.org/tomcat-6.0-doc/config/http.html).
Above the Connector there is a default Executor element,
but it is commented out out of the box, so let's uncomment it.
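For reference, the commented-out element in a stock Tomcat 6 server.xml looks roughly like this (attribute values may differ between releases):
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
          maxThreads="150" minSpareThreads="4"/>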
The default class behind this Executor is org.apache.catalina.core.StandardThreadExecutor.
You can of course supply your own class via the className attribute, but it has to implement
the Executor interface, just like StandardThreadExecutor does.
StandardThreadExecutor is built on top of Doug Lea's ThreadPoolExecutor:
public void start() throws LifecycleException {
    lifecycle.fireLifecycleEvent(BEFORE_START_EVENT, null);
    TaskQueue taskqueue = new TaskQueue();
    TaskThreadFactory tf = new TaskThreadFactory(namePrefix);
    lifecycle.fireLifecycleEvent(START_EVENT, null);
    executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), maxIdleTime,
            TimeUnit.MILLISECONDS, taskqueue, tf) {
        @Override
        protected void afterExecute(Runnable r, Throwable t) {
            AtomicInteger atomic = submittedTasksCount;
            if (atomic != null) {
                atomic.decrementAndGet();
            }
        }
    };
    taskqueue.setParent((ThreadPoolExecutor) executor);
    submittedTasksCount = new AtomicInteger();
    lifecycle.fireLifecycleEvent(AFTER_START_EVENT, null);
}
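To see the same trick in isolation, here is a minimal standalone sketch (my own example, not Tomcat code) that overrides afterExecute on a plain ThreadPoolExecutor to keep a count of tasks still in flight, much like StandardThreadExecutor does with submittedTasksCount:
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CountingPoolDemo {
    public static void main(String[] args) throws Exception {
        final AtomicInteger inFlight = new AtomicInteger();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(2, 4, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>()) {
            @Override
            protected void afterExecute(Runnable r, Throwable t) {
                inFlight.decrementAndGet(); // task finished, drop the counter
            }
        };
        for (int i = 0; i < 10; i++) {
            inFlight.incrementAndGet();     // count on submit, as StandardThreadExecutor's execute() does
            pool.execute(new Runnable() {
                public void run() { /* simulate handling one request */ }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("tasks still in flight: " + inFlight.get());
    }
}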
The remaining work is to add an executor attribute to the Connector so that it references tomcatThreadPool.
With that, executor finally comes into play.
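Assuming the pool is named tomcatThreadPool as in the default element above, the Connector would look something like this:
<Connector executor="tomcatThreadPool"
           port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />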
So the snippet we started with no longer needs a WorkerStack. Tomcat did roll its own WorkerStack,
but it still shows Doug Lea's work plenty of respect.
With an executor configured, processSocket no longer has to jump through hoops; it simply hands the socket off to the executor:
protected boolean processSocket(Socket socket) {
    try {
        if (executor == null) {
            getWorkerThread().assign(socket);
        } else {
            executor.execute(new SocketProcessor(socket));
        }
    } catch (Throwable t) {
        // This means we got an OOM or similar creating a thread, or that
        // the pool and its queue are full
        log.error(sm.getString("endpoint.process.fail"), t);
        return false;
    }
    return true;
}
The previous post covered the BIO connector, which boils down to creating a ServerSocket -> binding the port
-> accepting connections -> pulling a worker thread to handle each Socket.
Now let's move on to the NIO connector. First, change the connector protocol to org.apache.coyote.http11.Http11NioProtocol:
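In server.xml this is just a protocol change on the Connector, for example:
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000"
           redirectPort="8443" />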
That's all it takes to switch to the NIO model.
Just like in the BIO model, Http11NioProtocol initializes an NioEndpoint in its init() method (we met JIoEndpoint when discussing BIO, and we will meet AprEndpoint later). As before, the real work is done by NioEndpoint.
Let's start with NioEndpoint's init() method:
/**
 * Initialize the endpoint.
 */
public void init() throws Exception {
    if (initialized)
        return;
    serverSock = ServerSocketChannel.open();
    serverSock.socket().setPerformancePreferences(socketProperties.getPerformanceConnectionTime(),
            socketProperties.getPerformanceLatency(),
            socketProperties.getPerformanceBandwidth());
    InetSocketAddress addr = (address != null ? new InetSocketAddress(address, port) : new InetSocketAddress(port));
    serverSock.socket().bind(addr, backlog);
    serverSock.configureBlocking(true); //mimic APR behavior
    serverSock.socket().setSoTimeout(getSocketProperties().getSoTimeout());
    // Initialize thread count defaults for acceptor, poller
    if (acceptorThreadCount == 0) {
        // FIXME: Doesn't seem to work that well with multiple accept threads
        acceptorThreadCount = 1;
    }
    if (pollerThreadCount <= 0) {
        // minimum one poller thread
        pollerThreadCount = 1;
    }
    if (oomParachute > 0) reclaimParachute(true);
    selectorPool.open();
    initialized = true;
}
The method first creates a ServerSocketChannel and binds it to the listening port. There is one notable difference here: in NIO mode we would traditionally set up a server listener like this:
serverSock = ServerSocketChannel.open();
serverSock.socket().bind(new InetSocketAddress(8888));
Selector selector = Selector.open();
serverSock.configureBlocking(false);
serverSock.register(selector, SelectionKey.OP_ACCEPT);
while (selector.select() > 0) {
    // handle connection requests
}
Tomcat, however, accepts connections in blocking mode and only goes non-blocking for reads and writes. WebLogic does the same thing (I saw this by decompiling it), while open-source frameworks such as MINA still use the traditional approach.
I can think of two explanations. First, the acceptor thread's only job is to wait for new connections; it has nothing else to do, so a non-blocking accept buys nothing. Second, selector.select() can spin the CPU to 100% on some operating systems. If you have another explanation, please let me know, and thanks in advance.
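To make the contrast concrete, here is a minimal sketch of that style: blocking accept in one thread, and a selector-driven, non-blocking read loop in another. This is my own illustration, not Tomcat code; Tomcat's real Poller hands off registrations through an event queue instead of registering directly from the acceptor:
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class BlockingAcceptSketch {
    public static void main(String[] args) throws Exception {
        final Selector readSelector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(8888));
        server.configureBlocking(true); // acceptor blocks, like NioEndpoint

        // "poller" thread: non-blocking reads driven by the selector
        Thread poller = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        readSelector.select(1000);
                        for (SelectionKey key : readSelector.selectedKeys()) {
                            // read from (SocketChannel) key.channel() here
                        }
                        readSelector.selectedKeys().clear();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        poller.setDaemon(true);
        poller.start();

        while (true) {
            SocketChannel ch = server.accept();  // blocking accept
            ch.configureBlocking(false);
            readSelector.wakeup();               // let register() slip in between selects
            ch.register(readSelector, SelectionKey.OP_READ);
        }
    }
}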
The code also sets two fields: acceptorThreadCount and pollerThreadCount. The former is the number of acceptor threads, the latter the number of poller (read/write) threads. This is the "Using Multiple Reactors" model from Doug Lea's "Scalable IO in Java". acceptorThreadCount is usually set to 1, while Tomcat's default for pollerThreadCount is based on the number of CPUs (Runtime.getRuntime().availableProcessors()).
With initialization done, let's look at the start() method:
public void start() throws Exception {
    // Initialize socket if not done before
    if (!initialized) {
        init();
    }
    if (!running) {
        running = true;
        paused = false;
        // Create worker collection
        if (getUseExecutor()) {
            if (executor == null) {
                TaskQueue taskqueue = new TaskQueue();
                TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-");
                executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), 60,
                        TimeUnit.SECONDS, taskqueue, tf);
                taskqueue.setParent((ThreadPoolExecutor) executor, this);
            }
        } else if (executor == null) { //avoid two thread pools being created
            workers = new WorkerStack(maxThreads);
        }
        // Start poller threads
        pollers = new Poller[getPollerThreadCount()];
        for (int i = 0; i < pollers.length; i++) {
            pollers[i] = new Poller();
            Thread pollerThread = new Thread(pollers[i], getName() + "-ClientPoller-" + i);
            pollerThread.setPriority(threadPriority);
            pollerThread.setDaemon(true);
            pollerThread.start();
        }
        // Start acceptor threads
        for (int i = 0; i < acceptorThreadCount; i++) {
            Thread acceptorThread = new Thread(new Acceptor(), getName() + "-Acceptor-" + i);
            acceptorThread.setPriority(threadPriority);
            acceptorThread.setDaemon(getDaemon());
            acceptorThread.start();
        }
    }
}