Drawing Wide Lines in WebGL
2020-02-10

  Wide lines have always been a hurdle for WebGL beginners. On Windows the browsers' underlying rendering interface is provided by D3D, so no matter what you set lineWidth to, the line comes out one pixel wide. On mobile you can get wider lines, but the native API does nothing at the joins, so the result rarely meets project requirements; and for dashed lines or navigation routes the native API is helpless altogether. Having spent roughly a year doing WebGL development, I want to summarize the line-drawing techniques and the pitfalls I hit, for the benefit of those who come after.

How Wide-Line Rendering Works

  The core idea is to build the line out of triangles: a line with width is treated as a strip of triangles stitched together.

  Triangulating the line is computationally intensive; doing it on the main thread would block rendering and cause jank, so it is usually done in the vertex shader, exploiting the GPU's parallelism. In the shader, each vertex is shifted along its normal by lineWidth / 2. A vertex can only be shifted once, so on the CPU side we duplicate every vertex, upload both copies to the GPU, and precompute the triangle index order for the tessellation.
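The CPU-side preparation described above can be sketched as follows. This is my own minimal illustration, not the article's actual code; the `previous`/`next` attributes mentioned later are omitted for brevity.

```javascript
// Expand a polyline into the duplicated-vertex layout the vertex shader
// expects: every point is pushed twice, once with side = +1 and once with
// side = -1, and each segment becomes two triangles (a quad).
function buildLineGeometry(points) {
  const position = [];
  const side = [];
  const indices = [];
  for (let i = 0; i < points.length; i++) {
    const [x, y, z] = points[i];
    position.push(x, y, z, x, y, z); // the same vertex, twice
    side.push(1, -1);                // one copy offsets up, the other down
  }
  for (let i = 0; i < points.length - 1; i++) {
    const n = i * 2;                   // index of the first copy of point i
    indices.push(n, n + 1, n + 2);     // first triangle of the quad
    indices.push(n + 2, n + 1, n + 3); // second triangle of the quad
  }
  return { position, side, indices };
}
```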

  At the corners, a series of computations is needed to work out the join offset, for example:

  That diagram is rather involved; I prefer the simpler one below.

  Let dir1 be the unit vector last → current and dir2 the unit vector current → next. From these we compute avg = normalize(dir1 + dir2); rotating avg by ninety degrees gives the offset direction at the corner. Of course this offset can point either up or down, so each of the duplicated vertices mentioned earlier also carries a side attribute telling the shader which way to shift, and last / next from the diagram are likewise passed in as the previous and next vertex positions. The corresponding shader code:

```glsl
// On iOS 11, comparing floats with == can fail due to precision,
// making two equal numbers compare unequal; use an epsilon instead.
vec2 dir;
if (abs(nextP.x - currentP.x) <= 0.000001 && abs(nextP.y - currentP.y) <= 0.000001) {
    dir = normalize(currentP - prevP);   // last point: reuse the incoming direction
} else if (abs(prevP.x - currentP.x) <= 0.000001 && abs(prevP.y - currentP.y) <= 0.000001) {
    dir = normalize(nextP - currentP);   // first point: reuse the outgoing direction
} else {
    vec2 dir1 = normalize(currentP - prevP);
    vec2 dir2 = normalize(nextP - currentP);
    dir = normalize(dir1 + dir2);        // averaged direction at the corner
}

vec2 normal = vec2(-dir.y, dir.x);       // rotate 90 degrees
```
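A quick CPU-side check of the join math described above (my own sketch, mirroring the shader logic in plain JavaScript):

```javascript
// Average the two segment directions at a corner and rotate the result
// 90 degrees to get the offset normal at the join.
function miterNormal(prev, curr, next) {
  const norm = ([x, y]) => {
    const l = Math.hypot(x, y);
    return [x / l, y / l];
  };
  const dir1 = norm([curr[0] - prev[0], curr[1] - prev[1]]);
  const dir2 = norm([next[0] - curr[0], next[1] - curr[1]]);
  const dir = norm([dir1[0] + dir2[0], dir1[1] + dir2[1]]);
  return [-dir[1], dir[0]]; // rotate 90 degrees
}
```

For a right-angle corner the normal bisects the angle, pointing at 135 degrees, as expected.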

Putting It into Practice in the Shader

  With the principle settled, one more question must be answered before drawing: what unit is lineWidth in? If you want to draw in pixels, the 3D coordinates must first be mapped into screen space; lines drawn this way show no obvious perspective effect, i.e. their width does not change with camera distance.

  We need a few helper functions. The first is transform, which takes a 3D coordinate into clip space:

```glsl
vec4 transform(vec3 coord) {
    return projectionMatrix * modelViewMatrix * vec4(coord, 1.0);
}
```

  

  Next is project, which takes the clip-space coordinate returned by transform:

```glsl
vec2 project(vec4 device) {
    vec3 device_normal = device.xyz / device.w;
    vec2 clip_pos = (device_normal * 0.5 + 0.5).xy;
    return clip_pos * resolution;
}
```

  The first step, device.xyz / device.w, converts the coordinate into NDC space, where x, y, and z all lie between -1 and 1.

  Second, device_normal * 0.5 maps all coordinates into the range -0.5 to 0.5, and adding 0.5 shifts them into 0 to 1. Since we draw the line in screen space, z is no longer needed and can be discarded, so we keep only xy.

  Third, resolution is a vec2 holding the width and height of the displayed canvas. Multiplying clip_pos * resolution completes the conversion to screen coordinates: x now ranges over 0 to width and y over 0 to height, in pixels.

  

  Next comes unproject: once the final vertex position has been computed in screen space, it converts that screen coordinate back into clip space. It is the inverse of project.

```glsl
vec4 unproject(vec2 screen, float z, float w) {
    vec2 clip_pos = screen / resolution;
    vec2 device_normal = clip_pos * 2.0 - 1.0;
    return vec4(device_normal * w, z, w);
}
```

  Screen-space coordinates carry no z or w, so both must be supplied by the caller.
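As a sanity check on the pipeline above, here is a CPU-side sketch (my own code, mirroring the GLSL) verifying that unproject really is the inverse of project for the x/y components:

```javascript
const resolution = [800, 600]; // canvas width/height in pixels (example values)

// clip-space (post-projection) coordinate -> screen-space pixels
function project(device) {
  const nx = device[0] / device[3]; // perspective divide into NDC
  const ny = device[1] / device[3];
  return [(nx * 0.5 + 0.5) * resolution[0], (ny * 0.5 + 0.5) * resolution[1]];
}

// screen-space pixels (+ the original z and w) -> clip space
function unproject(screen, z, w) {
  const cx = screen[0] / resolution[0];
  const cy = screen[1] / resolution[1];
  return [(cx * 2 - 1) * w, (cy * 2 - 1) * w, z, w];
}
```

Multiplying by w in unproject pre-compensates for the perspective divide the GPU performs on gl_Position.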

  The full vertex shader:

```javascript
var vertexShaderSource = `
precision highp float;

attribute vec3 position;
attribute vec3 previous;
attribute vec3 next;
attribute float side;
attribute float width;
attribute vec2 uv;
attribute float counters;

uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
uniform vec2 resolution;
uniform float lineWidth;
uniform vec3 color;
uniform float opacity;
uniform float near;
uniform float far;
uniform float sizeAttenuation;
uniform float forBorder;

varying vec2 vUV;
varying vec4 vColor;
varying float vCounters;

vec4 transform(vec3 coord) {
    return projectionMatrix * modelViewMatrix * vec4(coord, 1.0);
}

vec2 project(vec4 device) {
    vec3 device_normal = device.xyz / device.w;
    vec2 clip_pos = (device_normal * 0.5 + 0.5).xy;
    return clip_pos * resolution;
}

vec4 unproject(vec2 screen, float z, float w) {
    vec2 clip_pos = screen / resolution;
    vec2 device_normal = clip_pos * 2.0 - 1.0;
    return vec4(device_normal * w, z, w);
}

void main() {
    float aspect = resolution.x / resolution.y; // screen aspect ratio
    float pixelWidthRatio = 1. / (resolution.x * projectionMatrix[0][0]); // (r-l)/(2n*width)

    vColor = vec4(color, opacity);
    vUV = uv;

    vec4 finalPosition = transform(position);
    vec4 prevPos = transform(previous);
    vec4 nextPos = transform(next);

    vec2 currentP = project(finalPosition);
    vec2 prevP = project(prevPos);
    vec2 nextP = project(nextPos);

    vec2 dir;
    // On iOS 11, == comparisons on floats fail due to precision; compare with an epsilon.
    if (abs(nextP.x - currentP.x) <= 0.000001 && abs(nextP.y - currentP.y) <= 0.000001) {
        dir = normalize(currentP - prevP);
    } else if (abs(prevP.x - currentP.x) <= 0.000001 && abs(prevP.y - currentP.y) <= 0.000001) {
        dir = normalize(nextP - currentP);
    } else {
        vec2 dir1 = normalize(currentP - prevP);
        vec2 dir2 = normalize(nextP - currentP);
        dir = normalize(dir1 + dir2);
    }

    vec2 normal = vec2(-dir.y, dir.x);

    float realSide = forBorder > 0.0 ? (side < 0.0 ? side : 0.0) : side;
    vec2 pos = currentP + normal * lineWidth * realSide * 0.5;

    // Alternative: offset in NDC space instead of screen space.
    // vec4 offset = vec4(normal * realSide, 0.0, 1.0);
    // finalPosition.xy += offset.xy;
    // gl_Position = finalPosition;

    gl_Position = unproject(pos, finalPosition.z, finalPosition.w);
}
`;
```

  

How Dashed Lines and Arrows Are Drawn

  The sections above covered wide lines, but map scenarios often call for dashed lines, metro lines, navigation routes, and other patterned lines. Here I focus on navigation routes; once those are clear, dashed and metro lines are straightforward. To draw a navigation line, several questions need answers:

- how far apart the arrows should be;
- over how many meters a single arrow should be drawn (a bad estimate distorts the image);
- how to make every pixel inside the line region sample the matching texel;
- plus assorted per-device compatibility issues.

  Dashed, metro, and navigation lines can all be described by the same diagram. We decide that within every markerDelta meters, a marker (the gap of a dashed line, the black block of a metro line, the arrow of a navigation line) is drawn over a stretch of length uvDelta starting at halfd (halfd = markerDelta / 2). The question then becomes: how does each pixel know which part of the line it should be? My approach is to compute, for every vertex, its distance from the route's start divided by the total route length, and store that ratio in the texture coordinates; interpolation of the texture coordinates then gives every pixel its own length ratio. In the shader we multiply by the total route length to get the pixel's distance uvx from the start. Taking uvx modulo markerDelta gives muvx, the position within the current interval, and the rule (if (muvx >= halfd && muvx <= halfd + uvDelta)) decides whether the pixel lies inside the marker region. For a navigation line we must fetch texels from the arrow image, so the pixel's actual texture coordinate is float s = (muvx - halfd) / uvDelta. The corresponding shader code:

```glsl
float uvx = vUV.x * repeat.x;        // distance from the route start
float muvx = mod(uvx, markerDelta);  // position within the current interval
float halfd = markerDelta / 2.0;
if (muvx >= halfd && muvx <= halfd + uvDelta) {
    float s = (muvx - halfd) / uvDelta;       // u coordinate into the arrow texture
    tc = texture2D(map, vec2(s, vUV.y));
    c.xyzw = tc.w >= 0.5 ? tc.xyzw : c.xyzw;  // only opaque texels replace the line color
}
```
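The marker arithmetic above is easy to check on the CPU side. A minimal sketch of my own, in plain JavaScript:

```javascript
// Given the distance uvx from the route start, decide whether a pixel falls
// inside a marker region; if so, return the texture u coordinate in [0, 1],
// otherwise return null (keep the base line color).
function markerSample(uvx, markerDelta, uvDelta) {
  const muvx = uvx % markerDelta;    // position within the current interval
  const halfd = markerDelta / 2;
  if (muvx >= halfd && muvx <= halfd + uvDelta) {
    return (muvx - halfd) / uvDelta; // normalized position inside the marker
  }
  return null;
}
```

With markerDelta = 10 and uvDelta = 2, each 10-unit interval draws its marker over the stretch [5, 7].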

  The complete fragment shader:

```javascript
var fragmentShaderSource = `
#extension GL_OES_standard_derivatives : enable
precision highp float;

uniform sampler2D map;
uniform sampler2D alphaMap;
uniform float useMap;
uniform float useAlphaMap;
uniform float useDash;
uniform vec2 dashArray;
uniform float visibility;
uniform float alphaTest;
uniform vec2 repeat;
uniform float uvDelta;     // length of the region where the arrow is drawn
uniform float markerDelta; // interval between markers
uniform vec3 borderColor;

varying vec2 vUV;
varying vec4 vColor;
varying float vCounters;

void main() {
    vec4 c = vColor;
    if (useMap > 0.0) {
        vec4 tc = vec4(1.0, 1.0, 1.0, 0.0);
        float uvx = vUV.x * repeat.x;
        float muvx = mod(uvx, markerDelta);
        float halfd = markerDelta / 2.0;
        if (muvx >= halfd && muvx <= halfd + uvDelta) {
            float s = (muvx - halfd) / uvDelta;
            tc = texture2D(map, vec2(s, vUV.y));
            c.xyzw = tc.w >= 0.5 ? tc.xyzw : c.xyzw;
        }
    }
    // if (c.a < alphaTest) c.a = 0.0;
    gl_FragColor = c;
    // gl_FragColor.a *= step(vCounters, visibility);
}
`;
```

  As for markerDelta and uvDelta, they must be computed from factors such as the camera distance and the texture image's properties. For example, here is how my project computes them:

```javascript
let meterPerPixel = this._getPixelMeterRatio();
let ratio = meterPerPixel / 0.0746455; // current scale relative to zoom level 21
let mDelta = Math.min(30, Math.max(ratio * 10, 1)); // clamp the marker interval
// 8 is an empirical value; it should really be derived from the line's pixel
// width and the texture image's aspect ratio.
let uvDelta = 8 * meterPerPixel;
uvDelta = parseFloat(uvDelta.toFixed(2));
this.routes.forEach(r => {
    if (r._isVirtual) { return; }
    r._material.uniforms.uvDelta = { type: "f", value: uvDelta };
    r._material.uniforms.markerDelta = { type: "f", value: mDelta };
});
```

  Another question is how to draw a line with a border. It can be handled in the shader, e.g. by choosing a threshold beyond which pixels take the border color; or, more simply, draw the line twice: a wider pass in the border color and a narrower pass in the main color, with the draw order arranged so the main line sits on top of the border.
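The two-pass approach can be sketched like this. This is my own illustrative code, not from the article; the pass descriptors are a made-up structure:

```javascript
// Describe the two draw passes for a bordered line: the border pass is
// wider by borderWidth on each side and must be drawn first so the main
// line covers its center.
function borderPasses(lineWidth, borderWidth, mainColor, borderColor) {
  return [
    { width: lineWidth + 2 * borderWidth, color: borderColor }, // drawn first
    { width: lineWidth, color: mainColor },                     // drawn on top
  ];
}
```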

Compatibility Pitfalls

  The first thing I ran into was texture distortion on an iPhone 6 Plus running iOS 10.3.3.

  Texture distortion means the device pixels no longer line up with the texels, but why not? Since it is a uv-mapping problem, I debugged by painting every pixel that falls in the texture region red, and found that vertically the red region was identical at every scale and, judging from the image, covered essentially the whole texture, so the vertical axis was fine.

 

  That left the horizontal sampling. But the horizontal coordinate comes straight from the texture coordinates, with no extra computation involved, so suspicion eventually fell on numeric precision. Changing mediump to highp fixed it: the iPhone 6 now draws perfect arrows.

```glsl
precision mediump float; // the culprit; change to: precision highp float;
```

 

 

  Then came an even nastier problem: on iPhone 7 and newer devices, stray specks appeared around the arrows...

  First I had to figure out what the specks were. They showed up no matter which image I used. Initially I blamed precision in the texture-coordinate math, but no amount of tweaking the u range could avoid the problem in every case.

  Eventually I stumbled on the fact that changing this comparison made the problem go away.

 

  So the specks must be coming from the texture itself. Possibly the linear filter averages several texels in that region, so the alpha channel ends up nonzero while also picking up some averaged color, and those pixels show through. My final suspicion was mipmapping: the device pixel happens to fall between two mip levels, and the interpolated result produces a speck. Whether mipmapping is really to blame still needs verification; project deadlines forced me to move on. By the time this was fixed it was past four in the morning.

 

  Then yet another problem appeared: on the iPhone 6, at certain angles the texture disappears, which turned out to be caused by the comparison above.

  Widening the threshold range solved it; this part needs a proper cleanup later, exposing the threshold as a configurable input.

 

References

WebGL rendering of solid trails: http://codeflow.org/entries/2012/aug/05/webgl-rendering-of-solid-trails/

Smooth thick lines using geometry shader: https://forum.libcinder.org/topic/smooth-thick-lines-using-geometry-shader

Drawing Antialiased Lines with OpenGL: https://www.mapbox.com/blog/drawing-antialiased-lines/

Drawing Lines is Hard

 
