Expresii 寫意

Hong Kong Illustration and Creative Show 2022 香港插畫及文創展

28/11/2022

We're very happy to see our Expresii app paired with Wacom's hardware at CG Live's booth at the Hong Kong Illustration and Creative Show (HKICS) 2022, held on 26-27 Nov 2022. This two-day event was short but packed with visitors. The HKICS has quickly become a major event for creatives in Hong Kong.

Thanks to CG Live, we recently had the chance to take part in the Hong Kong Illustration and Creative Show 2022, where we met many people interested in painting and creative work. What a lively crowd! Though the show's history is short, it has quickly become a major event in Hong Kong's creative scene!
We thank CG Live for having Expresii on some of the demo machines in their booth. They also offered a Wacom + Expresii bundle promotion for this event. Sensei Shuen attracted viewers as well when she demoed painting in Expresii on the latest Wacom Cintiq Pro 27.

Thanks to CG Live for installing Expresii for visitors to try! They even offered an event-exclusive Wacom + Expresii bundle deal. Besides the introductory classes mentioned below, Sensei Shuen also demonstrated Expresii at the CG Live booth, drawing crowds of onlookers!
Sensei Shuen's demo on the Wacom Cintiq Pro 27. Watch her digital ink demonstration:

Wacom Classroom: Digital Ink Painting Class

Wacom has been sponsoring the Show, and this time they also sponsored the 'Wacom Classroom' as one of its activities. We were glad to have Sensei Shuen teach two introductory classes here on using Expresii for digital ink painting. Sensei Shuen has been using Expresii for digital ink painting since the app's very early days. Before working for an animation studio from 2017 to 2020, she taught kung fu and extracurricular painting to school children, as well as doll making and manga making to adults. We hope all the students enjoyed the class.

Wacom has sponsored this creative show in recent years, and this time its sponsorship included the 'Wacom Classroom' activities. We were fortunate to have Sensei Shuen teach two introductory digital ink painting classes there. She is highly proficient with Expresii, having used it since its first beta. Her teaching experience is also extensive: while working at an animation studio from 2017 to 2020, she trained junior animators in painting with Expresii; before that, she taught kung fu and extracurricular painting classes to school children, as well as doll making and manga making to adults. She taught with great care, and we hope all the students had fun!
A big thanks to CG Live for having us at the booth and for organizing the Wacom class! We also thank the organizer HKHands for putting on this show! We hope we can join forces to further promote digital painting and calligraphy!

Once again, thank you to CG Live for letting us take part in this major local creative event, and to HKHands for organizing the show! We hope more like-minded people will work together to popularize digital calligraphy and ink painting. It hasn't been an easy journey, so please keep supporting us!

Vexel: Marrying Pixels with Vectors for Organic Digital Painting

30/10/2021

Expresii's novel vexel rendering makes pixels zoom like vectors! (」゜ロ゜)」
Left: Naïve blending of raw image pixels with paper texture. Right: Expresii's vexel rendering at 100x zoom.
Imported watercolor image displayed at 20x zoom with Expresii's vexel rendering

Introduction - The Digital Illustrator's Dilemma: Raster or Vector?

If you have ever needed to print your artwork large, you know how important high resolution is. For over 40 years, digital paint programs have mostly been raster-based, treating pixels as squares: zoom into your artwork and you see big, fat pixels. If you don't want fat pixels or interpolation blur, the current solution is to use a vector-based program instead.
Most paint apps render artwork as square pixels when zoomed in
A few paint apps use bilinear interpolation
The following is a real-world example of insufficient resolution. The artist is clearly mimicking watercolor in this painting, which was printed as a mural. It looks fine from afar, but walk closer and you will notice the interpolation blur.
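To make the fat-pixels-vs-blur trade-off concrete, here is a tiny self-contained Python sketch (our own illustration, not code from any app discussed here). It magnifies a 1-D "image" containing a hard edge using the two common raster policies:

```python
# Upsample a tiny 1-D "image" two ways, to show why zoomed raster art
# shows either fat pixels (nearest) or interpolation blur (bilinear).

def nearest(src, scale):
    # Nearest-neighbour: each source pixel becomes a flat block -> "fat pixels".
    return [src[int(i / scale)] for i in range(len(src) * scale)]

def bilinear(src, scale):
    # Linear interpolation (1-D analogue of bilinear): the edge is smeared
    # across roughly `scale` output pixels -> blur.
    out = []
    for i in range(len(src) * scale):
        x = i / scale
        x0 = min(int(x), len(src) - 1)
        x1 = min(x0 + 1, len(src) - 1)
        t = x - x0
        out.append(src[x0] * (1 - t) + src[x1] * t)
    return out

src = [0.0, 1.0]        # a hard black-to-white edge, two pixels wide
print(nearest(src, 4))  # [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
print(bilinear(src, 4)) # [0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0, 1.0]
```

Neither policy recovers the sharp edge: nearest keeps it sharp but blocky, bilinear keeps it smooth but blurry. That is the dilemma this article is about.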

Trying to get the best of both worlds?

Over the years, there have been several attempts to resolve the raster-vs-vector dilemma. One of them was Creature House Expression (first released in 1996; discontinued 2003), which allowed mapping bitmaps onto vector strokes. Because those bitmaps are resolution-limited, such strokes turn blurry when rendered large, and heavily deformed, stretched bitmaps also look unnatural.

Mischief (released 2013; discontinued 2019) claimed to provide "the richness of pixel-based brushes AND the scalability of vectors". Mischief is a vector program, but its vector strokes cannot be edited, at least in any released incarnation. Like other pure-vector programs, it's hard to get painterly without using too many vector strokes, which may bog down the system; its raster look was achieved mainly with an airbrush-like brush that gives a dithered falloff, or simply by stacking semi-transparent strokes.

Concepts (released 2012) is another vector drawing app, which also supports mapping pixel stamps onto vector paths, in the same vein as Expression's skeletal strokes. Strokes made in Concepts are editable (in the paid version). With its basic tools like the airbrush (similar to Mischief's), it's possible to achieve what Mischief can do, namely smooth shades that give a traditional raster painting look. Pixel-stamp-mapped strokes add raster richness on top of that, but such strokes still show interpolation blur when zoomed in.
Pixel-stamped 'Pencil', generated 'Airbrush' and pure-vector 'Pen' strokes in app Concepts at 16x zoom
Mingling vector & raster strokes in Adobe Fresco
Adobe's latest paint program, Fresco (released 2019), allows both pixels and vectors in the same artwork (code-named 'Gemini' for 'the combination of pixels and vectors in a single app'). However, mixing the two stroke types makes the artwork look incohesive when you zoom in: vector strokes stay razor-sharp while raster strokes pixelate, so the effective resolution is limited by that of the raster strokes. Adobe's earlier app Eazel (released 2011; discontinued 2015?) generated thousands of semi-transparent polygons to mimic raster richness, but the result doesn't look nice because the smooth, clean curves are far from organic.

Affinity Designer (released 2014) also allows both pixels and vectors in the same artwork. It links its vector and raster personas more tightly by allowing raster strokes to be clipped onto crisp vector shapes for texture or grain, and it also supports pixel-mapped vector strokes. But whenever a bitmap is involved, strokes still pixelate or blur when zoomed in.

HeavyPaint (released 2019) saves your artwork as stroke information and can regenerate the painting at a higher resolution, typically 2x or 4x. With stroke paths stored as vectors, this can be considered a hybrid raster-vector approach. In fact, every vector-based app can be regarded as a raster app that redraws the visible strokes whenever the user edits them or changes the view; such apps need heavy optimization to stay responsive with a large number of strokes.
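The "store strokes, regenerate larger" idea can be sketched as follows. This is our own toy illustration, not HeavyPaint's actual code or data format: strokes are kept in resolution-independent (normalized) coordinates, so exporting at a higher resolution just replays them onto a bigger raster instead of interpolating pixels:

```python
# Toy stroke-replay renderer: strokes are circular stamps with centre and
# radius in normalized [0, 1] units, so any output size works equally well.

def render(strokes, size, scale=1):
    """Rasterize circular stamps onto a (size*scale) x (size*scale) canvas."""
    w = h = size * scale
    canvas = [[0] * w for _ in range(h)]
    for (cx, cy, r) in strokes:             # normalized centre and radius
        px, py, pr = cx * w, cy * h, r * w  # map into this canvas's pixels
        for y in range(h):
            for x in range(w):
                if (x - px) ** 2 + (y - py) ** 2 <= pr ** 2:
                    canvas[y][x] = 1
    return canvas

strokes = [(0.5, 0.5, 0.25)]   # one stamp in the middle of the page
lo = render(strokes, 8)        # 8x8 working preview
hi = render(strokes, 8, 4)     # same strokes replayed at 4x: a 32x32 export
```

The 4x export re-rasterizes the geometry rather than scaling pixels, so its edge is as crisp as the preview's, which is the appeal of the hybrid approach.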

To the best of our knowledge, no existing app can really give raster richness and ultra-high-res output at the same time.

Previous Academic Work

Trying to improve the rendering quality of magnified bitmaps is certainly not new. Here, let's compare our novel vexel rendering technique with related methods that involve raster-to-vector conversion.
Pixel-Art-Specific
These stem from the need to render old video games from the '80s and early '90s on modern hardware. One notable work is the SIGGRAPH 2011 paper by Kopf and Lischinski. It obtains very nice results, but the global nature of the algorithm makes it hard to implement on the GPU. Two years later, Silva et al. published a GPU algorithm with real-time performance:
These methods are designed specifically for very low-res images, using heuristics on connected pixels, and are thus not suitable for digital painting at common canvas sizes.
Embedding Explicit Edges
Around 2004-2005, several papers dealt with rendering discontinuities in sampled images. They include Bixels (2004), Silhouette Maps (2004), Feature-Based Textures (2004), and Pinchmaps (2005).
The Silhouette Map [Sen 2004] encodes edge information into image data
The Pinchmap [Tarini and Cignoni 2005] shifts texture coordinates so that interpolation across edges is 'pinched' to give sharp edges
The main insight here is that the usual image interpolations (bilinear or bicubic) handle discontinuities badly. All these methods attempt to preserve sharp edges by encoding boundary information in the images. The earlier methods are not very fast because deriving and using the edge info is complex. The later Pinchmap improves run-time performance by removing the need for case-by-case handling, and can thus be rendered very efficiently on the GPU. However, it still requires time-consuming pre-processing to derive the 'pinching' configuration, so it remains unsuitable for real-time painting applications.
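As a toy 1-D illustration of the 'pinching' idea (our drastic simplification, not Tarini and Cignoni's actual encoding): given a pre-computed edge location between two texels, sampling snaps to one side of the edge instead of interpolating across it:

```python
# Compare plain interpolation against a 'pinched' lookup across one texel pair.

def plain_lerp(a, b, t):
    # Ordinary interpolation: smears the edge across the whole texel.
    return a + (b - a) * t

def pinched_lerp(a, b, t, e):
    # `e` in (0, 1) is the pre-computed edge location between the two texels.
    # The sample is 'pinched' to whichever side of the edge it falls on.
    return a if t < e else b

a, b, edge = 0.0, 1.0, 0.3
blurry = [plain_lerp(a, b, i / 9) for i in range(10)]         # smooth ramp
sharp = [pinched_lerp(a, b, i / 9, edge) for i in range(10)]  # hard step at 0.3
```

Even at arbitrary magnification, the pinched lookup keeps the edge one output pixel wide, while plain interpolation stretches it over the whole magnified texel.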
Implicit-Function-Based
Ray et al.'s 2005 Vector Texture Maps was an early use of implicit function values to define discontinuities in glyphs, avoiding evaluation of explicit curves on the GPU. Qin et al. (2006) later used the Signed Distance Function (SDF) as the implicit function, evaluated on the fly and hierarchically for robustness. In 2007, Chris Green of Valve showed how they used SDFs for rendering glyphs in a game environment. In that application, they did not need to render the glyphs exactly (e.g. with sharp corners), so they could use rasterized SDF values in a much simplified way that runs well even on low-end graphics hardware. Note that all of these work well only for binary-masked glyphs.
[Green 2007] uses SDF values stored as texture to render vector 'line art' images.
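The SDF thresholding trick from [Green 2007] can be sketched in a few lines. This is a 1-D CPU re-creation for illustration only, with function names of our own choosing; the real technique stores a 2-D distance texture and does the thresholding in a pixel shader:

```python
# Recover a crisp, antialiased edge at any zoom from a low-res sampled
# signed distance field (negative = inside the glyph).

def lerp(a, b, t):
    return a + (b - a) * t

def smoothstep(e0, e1, x):
    # Same semantics as the GLSL built-in: clamped Hermite interpolation.
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3 - 2 * t)

def coverage(sdf, x, soft=0.05):
    # Interpolating *distances* is fine (bilinear in 2-D; linear here),
    # because the hard edge is re-created by thresholding afterwards.
    x0 = min(int(x), len(sdf) - 2)
    d = lerp(sdf[x0], sdf[x0 + 1], x - x0)
    return 1.0 - smoothstep(-soft, soft, d)  # thin AA band around d == 0

sdf = [-1.5, -0.5, 0.5, 1.5]   # the edge sits between samples 1 and 2
print(coverage(sdf, 0.5))      # deep inside the glyph -> 1.0
print(coverage(sdf, 1.5))      # exactly on the edge   -> 0.5
print(coverage(sdf, 2.5))      # well outside          -> 0.0
```

Because the distance field varies smoothly, a coarse SDF texture yields a sharp edge at any magnification, which is exactly why it suits binary-masked glyphs so well.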
The SIGGRAPH 2010 paper Vector Solid Textures uses SDFs to generate solid textures, but the encoding requires complex pre-processing and is still limited by a non-overlapping-region requirement. Allowing overlapping regions requires more SDFs, and thus more storage and processing. To our knowledge, no one has yet found a way to use implicit functions to represent full-color overlapping regions efficiently.

Our solution: Vexel Rendering

Is it actually possible to have raster richness and still be able to scale it much larger? We think we have found a solution, at least for organic, natural-media digital painting. Specifically, our goal is to render artwork as if it were done on real paper*.
We perform this magic with a shader program. A shader is a program that runs on the GPU and calculates the final image output. Your image data may be stored in an array, but not in the traditional 'pixel' sense: each slot stores color or other attributes, and it's then up to the shader how to render the final image from that information. In a way, traditional pixels are your final image discretized, and most other paint programs display them as colored squares tiled to give the final image. Our shader, on the other hand, takes the stored information and generates the final image. Vexel rendering works well for our paint simulation output, for ordinary raster illustrations, and for photos of watercolor artwork. For example, the following video takes a sample illustration from irasutoya as input and shows that you can still add our simulated strokes to it; the first demo image in this article uses a photo of real watercolor marks as input.
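As a purely illustrative sketch (Expresii's actual shader is not public, and every name here is our invention), the key idea of evaluating stored paint attributes together with a procedural substrate per output pixel might look like this, written in Python rather than shader code:

```python
# The stored array holds paint *attributes*, not final pixels; the "shader"
# evaluates the final color per output pixel, so substrate detail can be
# generated at any zoom instead of being baked in. Toy model, 1-D for brevity.
import math

def pigment_density(store, u):
    # Smoothly interpolate the stored pigment attribute at position u in [0,1].
    x0 = min(int(u * (len(store) - 1)), len(store) - 2)
    t = u * (len(store) - 1) - x0
    return store[x0] * (1 - t) + store[x0 + 1] * t

def paper_height(u):
    # Procedural stand-in for a paper-fiber texture: evaluated per *output*
    # pixel, so it never blurs, no matter the zoom.
    return 0.5 + 0.5 * math.sin(u * 400.0)

def shade(store, u):
    # Toy shading rule: pigment darkens more where it settles into grooves.
    d = pigment_density(store, u)
    return max(0.0, 1.0 - d * (0.6 + 0.4 * paper_height(u)))

store = [0.0, 0.9, 0.2]                                # coarse stored paint data
view = [shade(store, i / 999) for i in range(1000)]    # a 1000-px "zoomed" view
```

The stored array here has only three slots, yet the 1000-pixel view carries fiber-level variation generated at display time; that decoupling of stored data from displayed pixels is the essence of what this section describes.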

* We will try to extend our program to render other paint media, like oil paint, later.

Raster Richness

Our paint information is stored in an array, giving raster richness. In fact, Expresii simulates watercolor/ink flow, producing rich, organic results unmatched by any other app. On top of that, we render the artwork as if it were painted on a real piece of paper, showing detail down to the paper-fiber level. Thanks to its raster nature, you can add an unlimited number of strokes to the page without bogging down the system the way conventional vector programs do. Raster operations like blurring or smudging are also possible; such operations would be difficult in a pure-vector program.

Vector Scalability

Because the output is generated, its resolution is flexible. Our current maximum output resolution is limited by our raster paper texture, which becomes blurry beyond about 30x zoom. With new tools like Material Maker, we can design substrate textures using shaders, so a future version of Expresii may replace the paper texture with one that is resolution independent.
Comparison between raw pixels and vexel rendering at 50x zoom
At moderate zoom, around 20x, our rendered image looks like real non-volumetric paint marks on paper. By varying the shading derived from the paper texture, we can emulate watercolor/ink soaked into the paper fibers, or crayon/pencil marks laid on the paper surface. At very high zoom, around 100x, our rendering reveals curved shapes (see the first image of this article) that look like paper cutouts. Vexel rendering transitions smoothly from the look of an ordinary raster image to shaded vector shapes that integrate very nicely with the paper substrate.

Tailor-made for our needs

Comparison of a raster illustration (Center), its hand-traced vector (Left) and Expresii rendering (Right) at 40x zoom
As shown in the above image, traditionally vectorized images are too clean, too sterile. Current auto-trace tools tend to give you simple geometric shapes (like the left part of the figure), which poorly represent textural details such as those of watercolor marks. Traditional raster digital paintings are good at capturing detail, but they usually cannot be as large as 16k x 16k pixels unless your system is beefy enough and your app supports such a large canvas (the popular iPad app Procreate maxes out at 8k x 8k or equivalent, as of Oct 2021). In comparison, our current Youji 2.0 rendering engine outputs very nice textural detail up to around 40x magnification, and shaded vector shapes at 500x. We also need no model training, unlike AI-based approaches that give 2x to 8x magnification at non-interactive rates; everything is local and instant.
For organic digital painting, we actually prefer our vexel rendering over the simple curves produced by the image tracers of existing software tools, which give a flat, planar look. You can tune an image tracer to output more polygons, as in the figure above, but it is still hard to match the detail of our vexel rendering.
A Paradigm Shift
Existing vector programs are great for creating graphics composed of clean lines and shapes, and if that's what you're after, by all means use those tools. For digital painting, on the other hand, a major goal of many paint programs is a natural-media look. We seem stuck with the notion that vector primitives should be simple, clean curves, which is why results from current raster-to-vector conversion still look bland. Here, we render the raster data as 'vexels', allowing huge magnification while keeping all the textural detail.

Conclusion 

Expresii is the first digital painting app that really gives both raster richness and ultra-high-res output scaling, thanks to its novel GPU-based simulation and rendering algorithms. Many artists trained in traditional media dislike digital tools because digital paintings pixelate when zoomed in. We believe Expresii has largely fixed this issue, and we hope more artists will be willing to go digital. That said, our vexel rendering is not a general solution to the problem of image interpolation: our method relies on the fact that many natural media have a substrate that provides grain texture.

Currently, Expresii only simulates water-based art media. We plan to add other media, like pastel and oil, in the near future. Stay tuned.
Update: The following video shows 'vexel rendering' used in an actual painting, and shows that we can export 32k images thanks to our ultra-zooming capability:

Why the term 'vexel'?
When developing new tech to avoid seeing fat pixels in digital painting, I wanted a name suggesting a combination of vector and pixel, and I came up with 'vexel'. Later I found out that 'vexel' had already been coined by Seth Woolley, since at least 2006, to refer to raster images that look like vector graphics. I tried to find another name but couldn't find one as meaningful, so I decided to stick with 'vexel'. After all, our rendering does look like vexel art by Seth's definition.

Discussion on supporting Animation Production and more

30/7/2021

Earlier this year, in April, our Dr. Nelson Chu was invited to give a talk at the ACAS inaugural conference. As you may know, Expresii has been used in the production of a few animated shorts and the ink-painting-styled feature animation Red Squirrel Mai (although Mai's production may be halted). Here we'd like to elaborate on the questions raised for Nelson during the conference:

Q: From your lecture, we have seen the many possibilities of traditional ink painting in digital art. I think it takes a combination of traditional art and computer skills, as well as a deep understanding of animation. How do you manage the relationship between them?

A: Hiring multi-talented people helps bridge the gaps. I believe they have a better chance of solving problems no one has solved before. I have been lucky to work with some tech-savvy, self-motivated artists. For instance, my friend Shuen Leung, who has been using Expresii since its early versions, can produce nice work without much instruction from me. Whenever I add new functionality to the program, she simply explores it and sees what she can come up with. Our (then) director Angela Wong soon recognized Shuen's abilities and invited her to join the production team. Having Shuen on the team has been a privilege. She's also a kung fu practitioner who can advise us on the kung fu moves that would appear in the Mai movie.

In my role, I simply try my best to fulfill the feature requests from the directors. They are the ones actually managing the people in the production team. I can't speak for them, but from what I've observed, putting people in the right roles definitely helps things run smoothly.

Q: How can experts in different fields work well together?

A: Mutual respect and good communication are important. Try to understand others' points of view and/or the constraints they face. For example, many of us have opinions on what ink-painting-styled animation should be like. We may offer our views, and in fact there have been some heated discussions, but ultimately the film director decides. For the production to go on, we let those in managing positions do their jobs. As a decision maker, one has to weigh all the opinions and make appropriate choices. It surely isn't easy.
​
Q: Is there an inherent contradiction between the serendipity of ink painting and the standardization resulting from computer algorithms or automation?

A: I actually don't see a contradiction here. What you do depends on the project at hand. As an artist, I think serendipity is great; that's mainly why I started building my own tool for brush painting. But for animation production, we hire many animators to work on the same film, so we need standardization (e.g., a style guide) to keep the resulting animation coherent. Another reason for standardization is controlling the amount of frame-to-frame flickering caused by stroke differences. Some say, "if it's repeatable, it's not art", but here you actually want the strokes to be repeatable with the desired parameters! ^_^ Imagine you're halfway through your frames and your art director asks, "Can you make the tentacles of the octopus a bit thicker?" If your strokes are auto-generated instead of hand-painted, you can simply change the parameters and have the computer re-render them!

Q: How do you see the future of ink-painting-styled animation (水墨動畫)?

A: Previously, only large studios could afford research work, developing new algorithms for, say, simulating animal fur to support their storytelling. Nowadays, more individuals are learning programming, and one can devise new algorithms specifically for ink-painting-styled animation without relying on existing software, which may not suit the effects they are after. That said, any party determined to invest in new ink-painting-styled animation still needs patience, as developing a tailor-made solution and putting it into a workable pipeline usually requires a lot of experimentation. The good news is that Angela Wong has prototyped a workable solution that was already used in the animated short Find Find (as discussed in my talk). We just need to develop it further and put it to the test more. Let us know if you're interested in joining this journey.

Q: How can artists or creatives get more involved in computer programming or on the technological side?

A: They can learn programming online for free. Just search for 'creative coding' and you will find plenty of tutorials and samples. I think that's exactly how Angela Wong learned programming. I believe you will easily be inspired to use code in your next project!

Q: A really interesting point was brought up about how technology might be infiltrating art and calligraphy. On that note, what are your opinions on using Artificial Intelligence (AI) for storyboarding and animation? Do you think it will enhance our experience or just make us obsolete?

A: Current state-of-the-art AI needs a large number of samples as input to learn a pattern from them and apply that pattern to perform certain tasks. That means it cannot yet be creative all by itself. Computers are very good at optimization: given a well-defined goal or cost function, a program can find the configuration that minimizes that cost.

I think the general public's view of AI may be skewed by movies and fiction, thinking AI can already think almost like a human. No, we're still quite far from that. And speaking of AI, beware: there are opportunists trying to fool you with fake AI!


Afterword

When Daisy Du of ACAS first contacted me in 2019, I was a bit surprised that there is an academic association dedicated to Chinese animation studies. I'm glad to see such enthusiasm for ink-painting-styled animation from humanities scholars. I hope more resources can be devoted to accelerating the modern development of CGI ink-painting-styled animation.

Here, I'd like to show you a 2018 tweet by a Japanese CG artist commenting on the lost art of ink-painting animation (水墨画によるアニメーション): "When will a hero who uses Expresii to make animation appear?"
Japanese CG artist 宍戸幸次郎: Expresiiでア二メ作る猛者はいつ出るのか ("When will a brave soul appear who makes animation with Expresii?")

Really, you just need the right people in the right roles. Will you be the next hero, or one who helps bring up such heroes? As the saying goes, swift horses are common, but a Bo Le who can recognize them is rare. In a harsh social environment, talent often goes unnoticed and wasted. We hope more people will do what they can to let these swift horses realize their potential.

Programmable keypad with dials

21/6/2021

We got ourselves a programmable keypad with multiple dials from Taobao for RMB 155 (USD 24). We found it to offer the best price-to-performance ratio among the alternatives.
Each dial has three actions: left turn, right turn, and click. The keys use mechanical switches. The device can also mimic a Surface Dial, offers eight layers of mappings, and supports macros.
The software tool for setting up the keys is in Chinese only, and it seems they currently focus only on the domestic market. Hopefully they will sell to global markets soon so that you can get and use one too.
They also offer a cheaper model with only three dials (image from a Taobao review)

Sleek 4K 15.6" Pen Display with MPP 2.0 Pen & Touch Review

21/6/2021

The cheapest 4K EMR pen displays right now are not cheap (USD 829+). Yet they do not support multi-touch gestures and are still a bit bulky. Those who have used a 2-in-1 or tablet will understand how much I miss touch gestures whenever I switch back to a traditional EMR pen display from Huion or XP-Pen.

New Challenger: Portable Display with Pen Support

Portable displays are a new category of device that has emerged in recent years. They have progressed a lot: you can now connect and power one entirely via a single USB Type-C cable, resolutions go up to 4K, and they offer touch and now pen support.

What we tested here is a portable display from ehomewei (see their store at Amazon), which looks exactly like the 15.6" pen display branded as LUNE, sold as a Kickstarter-like item (at USD 512 early bird) in Japan. We got a model without the G-Sensor (which enables automatic screen rotation) from TMall at RMB 2118 (USD 332).
We tested the display with a Surface Book 2, a Surface Go, an Intel i5 NUC 8, and a desktop with an Nvidia GTX 1060. The first three support USB Type-C display out, connecting to the display with just a single cable.

They claim the screen itself is 4mm thick, but our measurement reads more like 5.5mm. Together with the stand, we measure about 14mm instead of the claimed 10mm. Nevertheless, it's still very thin.

The MPP 2.0 Pen 

The supported pen is an active pen complying with the Microsoft Pen Protocol (MPP) 2.0. Like the Surface Pen, the included active pen requires a AAAA battery to operate, but you can get yourself one that uses a rechargeable battery instead.

Tilt sensing even during hovering. Yes, this device supports pen tilt even during hover! This is one area where this display beats a Surface device.

Button bug. It looks like this display has a bug with button signals. We don't get an eraser signal when pressing the lower side button of the included pen. In fact, the tip and both side buttons all report button = 1 in Expresii's Diagnostics page, which makes the side buttons useless, at least for painting in Expresii. Using another MPP 2.0 pen, we do get button = 2 when pressing the upper side button, so at least one side button becomes usable. The same pens give an eraser signal and button = 2 for the lower and upper side buttons respectively on a Surface device, so the ehomewei is definitely missing something. Testing further, we found that some apps, like Autodesk SketchBook Pro, do get the eraser signal, while others, like Paint Tool SAI 1.2.5, MediBang Paint Pro 26.2 (v2.1.21), and Krita 4.4.5 (using Windows Ink), don't on the ehomewei. On a Surface Book 2, SAI, MediBang, and Krita all respond to at least one side button of the same pens as eraser or color picker. We believe this is because the ehomewei hardware supports only certain pen APIs. Hopefully ehomewei can fix this with a firmware update.

Multi-monitor. Wacom's settings tool lets you map pen input to either monitor when, say, two monitors are connected to your PC. There's no such tool for ehomewei, but we were able to change the mapping by checking 'Make this my main display' for the target monitor in Windows 10's settings.

The 4K Screen

The screen is very bright, comparable to a Surface device. We were unlucky to receive a unit with a little dust behind the glass. The speck is several pixels wide, so it's quite visible. Sending it back is rather troublesome, so we may just have to live with it. (╯_╰)
Unlucky to get one piece of dust under the screen.
Windows 10 says the device is not HDR capable.
We get 8-bit instead of the advertised 10-bit color.
We were not able to get HDR or 10-bit color working with either the Intel Iris 655 iGPU (via Type-C) or the Nvidia GTX 1060 (via HDMI); we're not sure why. The display has a built-in 'UltraHDR' option, but as far as we can tell it only makes colors too bright and saturated, which is rather useless.

We do not have color calibration hardware to test the gamut, but out of the box the colors look rather different from those of a Surface Book 2.

The Stand

The flip stand is handy. When collapsed, the whole thing is only 14mm thick. The hinge bends from 0 to 90 degrees, but we found it hard to make the display stand by itself in portrait mode (see video on right). You can also place the display in a 'wedge mode', like using a drafting table. All the connectors and button controls are on the two sides of the stand. There are no VESA mount holes; the screen itself is probably too thin to host them. I still wish there were a VESA option. Maybe they could fit two holes in the stand? Two should be enough given the light weight.

Misc.

A speaker is included, but its sound is faint. Playing YouTube videos, the volume is so low that it's sometimes hard to hear people speaking. We asked the manufacturer whether the other model's built-in G-Sensor can act like a Windows tablet's G-Sensor. The answer is no, so we can't use surface tilting to direct paint flow in Expresii.
You can even pair the pen display with a programmable keypad with dials for a clean setup:

Drawing experience

The host PC is an Intel "Bean Canyon" NUC 8 i5 (released 2018; 14nm) with an Iris Plus 655 GPU and 8GB RAM. We used this PC because it's compact and supports a single-cable connection to the display. The 655 GPU is fast enough for Expresii at FHD display resolution but a little slow at 4K. Anyway, as you can see from the demos, it's still quite usable. With a newer NUC, like the latest 11th-gen with an Iris Xe GPU (10nm), you should get quite a performance boost.
The drawing experience is very much like using a Surface device: the same pens stroking on a glass surface, except you now have tilt sensing during hover. The display moves back a bit when you stroke on it, just like a Surface Book in laptop mode. Using it in 'wedge mode' is stable, but the angle is more or less fixed. You may want an adjustable stand if you plan to paint on it extensively.

Accuracy. The cursor may be a bit off near the screen edges. In comparison, the cursor on my Surface Book 2 follows the pen tip more closely.

Verdict

Compared to a traditional EMR pen display with thick bezels
Today, there are basically two major pen-sensing technologies: EMR and active pens. Traditional pen tablets and displays use EMR (the ones offered by Wacom, Huion, or XP-Pen) and usually come with thick bezels. Active pens include all the MPP pens (like the newer pens from HP, Dell, or Lenovo) as well as those from the USI camp; they require a battery, and the sensing modules are usually built into computers rather than sold as peripherals. Wacom's EMR offers the best capabilities: it senses tilt even during hover and is the only solution that senses barrel rotation. Now this ehomewei display offers tilt sensing during hover too, so in this regard you get something even better than what a Surface device gives you. The pen is interchangeable with other Surface Pen compatibles, so you have many choices from different brands.

The ehomewei display is essentially a very thin and bright 4K screen with pen and touch input. When we first saw the Microsoft Surface Studio, we wished Microsoft would sell the pen-and-touch display separately, so that we could upgrade the computer unit instead of replacing the whole all-in-one. This portable display is basically what we asked for.

We're not sure when it will become available globally; ehomewei currently sells their older non-pen models on Amazon. If it does come to your country, give it serious consideration.

2021.07.08 Update: Shuen Leung reported that the ehomewei gives wobbly lines when one tries to draw straight lines slowly. Our hope for the perfect portable pen display now rests on the XtendTouch Pro, which is supposed to give much better diagonal straight-line performance. We can't find an actual purchase link after their Kickstarter campaign ended; we hope they're still shipping.


Using Expresii Watercolor with the SonarPen via SpaceDesk

2/5/2021


 

The SonarPen

The SonarPen was launched through Kickstarter in 2018. It was designed primarily for iOS and Android smartphones and tablets, as an inexpensive alternative to pricey styli like the Apple Pencil.

We have actually been in contact with SonarPen's creator Elton Leung since 2018, hoping they would add Windows support. To date there's still no Windows driver, but thanks to SpaceDesk's support we can now use the pen with Windows apps too.
Picture
The SonarPen
Picture
SonarPen with a disk tip and a side button
We also wish it supported tilt sensing, but we're not sure that will ever happen.

The SpaceDesk app

SpaceDesk is a software tool that lets a host machine use other devices, such as tablets, as extra monitors. Its ability to build a monitor wall is amazing. Recently it added support for relaying pen input too, making it possible to use SpaceDesk as a wireless pen display. With SpaceDesk, we can now use the SonarPen with Windows apps, including our Expresii with its beautiful, organic digital watercolor.
Previously we tried Easy Canvas Pro (US$5/year) and SuperDisplay (US$10 one-time) as pen-display solutions. Currently we think SuperDisplay gives the best performance of the three, and it supports pen tilt, which is quite important for our app Expresii. SpaceDesk supports neither pen tilt nor multi-touch gestures yet. SpaceDesk is currently free to use, with a Business license also offered. Our recommendation is still SuperDisplay: the fastest, the most feature-complete, and inexpensive. We look forward to SuperDisplay supporting the SonarPen.

2021.05.06 Update: you can use USB tethering for a faster SpaceDesk connection.

What can still be added: it would be great if any of these programs could relay accelerometer/orientation sensor readings and act as the G-sensor of a Windows tablet, so that we could control paper tilt directly in Expresii to direct the paint flow. Running a separate app such as Sensor Stream IMU+GPS alongside doesn't work, since only one app can be in the foreground and today's smartphones put background apps to sleep after a few moments. We tried adding the app to the battery-saving exceptions and disabling battery optimization (on Android 9), but to no avail.

In the above video, we also show the new Brush Tilt Control Pad and Auto Settle Timer features:

Brush Tilt Control Pad

This was added specifically for the case of using the SonarPen via SpaceDesk, since the pen is not tilt-sensitive and SpaceDesk doesn't relay pen tilt yet. With other pens, the barrel (side) button can usually be reassigned by the user, so we can map it to the right mouse button to adjust the brush tilt. Under SpaceDesk, however, the SonarPen's side button is fixed to toggling between touch and pen input. Our answer is the Brush Tilt Control Pad. It acts like a virtual trackpad on screen: drag on it with your finger, mouse, or pen to adjust the brush tilt.
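As an illustration of how such a pad can turn a 2-D drag into a brush tilt, here is a minimal sketch. This is not Expresii's actual mapping; the function name, pad geometry, and angle conventions are our own assumptions: drag distance from the pad's centre sets how far the brush leans, and drag direction sets which way it leans.

```python
import math

def pad_to_tilt(dx, dy, pad_size=200, max_tilt_deg=60.0):
    """Map a drag vector on a square control pad to a brush tilt.

    dx, dy: drag offset in pixels from the pad's centre.
    Returns (altitude_deg, azimuth_deg): altitude 90 means the brush is
    vertical; azimuth is the direction the brush leans towards.
    """
    half = pad_size / 2.0
    # Normalised lean amount, clamped so dragging past the pad edge saturates.
    r = min(math.hypot(dx, dy) / half, 1.0)
    altitude = 90.0 - r * max_tilt_deg  # upright at centre, 30 deg at full lean
    azimuth = math.degrees(math.atan2(dy, dx)) % 360.0
    return altitude, azimuth
```

A drag at the pad's centre leaves the brush upright; a full drag to the right edge leans it the maximum amount toward the right.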

​Paint Settle Timer

You can now stack paint in the same layer by settling the paint, and we've also added a timer for auto-settling. You can try these new features in a beta version of Expresii, enabled by selecting the beta option in the System > Advanced tab. Please let us know how you like them.

Picture
Selecting Beta mode

Expresii running on an M1 machine via Parallels 16.5 / 17

18/4/2021


 
Expresii user Steve Kim just sent us a screenshot of Expresii running on his Apple M1 machine via Parallels 16.5:
Picture
We're amazed that it actually runs fine!

Steve shared: "At  4K screen resolution, I was getting 60 FPS during strokes and around 80 while not stroking. Full screen w/ the default landscape canvas and the largest brush. 1440p was 80 FPS while painting, 120 FPS when not stroking. The VM was set to 6 virtual cores and 8GB of ram on my 16GB M1 Mac Mini running the latest ARM build of Windows 10 via Parallels trial."

The installation process is documented in the following video:
2022 update: it runs well on an M1 MacBook Air via Parallels Desktop 17. We get 50+ to 80+ FPS in full screen:

Expresii on HP Envy x360 laptop / tablet convertible

16/4/2021


 
How does a 2020 HP laptop convertible perform with Expresii? We had a chance to test a 13" HP Envy x360 with the AMD 4700U APU. The machine is compact and sleek. It's so thin that the USB-A ports use an expanding-door design, which can be a bit troublesome: you can't simply push a USB plug straight into the port, but must open the door fully first. Some call a hybrid/convertible a 2-in-1, but note that this HP uses a folding design; the keyboard is not detachable like the Microsoft Surface Pro's, so it never becomes a truly thin tablet.
Picture
2020-released HP Envy x360 13" with rechargeable MPP 2.0 HP tilt Pen

Very good performance

We tested the G-sensor for paint-surface tilt, the HP Tilt Pen for virtual-brush manipulation, and the Radeon RX Vega 7 GPU for paint simulation:
The HP machine does run hot if we make strokes continuously for a while. However, the hot spot is on the bottom near the hinge, which we normally don't touch. We are pleased with the APU's performance, which gives 100+ FPS at the laptop's FHD screen resolution. Our unit has 8GB of RAM.

The HP Tilt Pen could be better

Having tried the RENAISSER Raphael 520, we find the HP Tilt Pen less sensitive in its initial activation force: you have to press a bit harder for lines to register. The HP Tilt Pen does feel solid with its metal body, and its USB Type-C charging port is hidden behind a sliding door.

The HP Tilt Pen doesn't have a soft tip like the Raphael 520 or the MS Surface Pen. On the glossy screen of the HP machine it feels slippery, which could be why the lines in our tests are wobbly.

Conclusion

Overall, we think the HP Envy x360 is a solid machine, but for art making we recommend replacing the bundled HP Tilt Pen (bundled at least in our local market) with cheaper but more sensitive Surface Pen alternatives (they are all compatible under the MPP 2.0 protocol), and you will have a great drawing experience.

There are also 15" sibling models if you need a larger display. Currently we don't recommend the Intel integrated-GPU models, since there's an issue with artwork exporting on Intel GPUs, so make sure you pick a model with an AMD or Nvidia GPU. Newer 2021 models with AMD 5000-series APUs, announced last month, are also already on sale.

Proposal: 基於筆劃的字型設計 Stroke-Based font design

4/4/2021


 
Today, typeface design is done by manipulating curves, where 'curve' means a glyph's outline. This makes adjustments inefficient: to narrow a 捺 (right-falling stroke), you must adjust the curves on both its left and right sides. Here is a demonstration by typeface designer Julius Hui (許瀚文):
Although GlyphsApp has a 'Smart Component' feature that lets you set up several variants of the same component and interpolate the in-between forms, everything is still defined by outlines, and the interpolated results sometimes look a bit off.

I find this especially ill-suited to producing Chinese calligraphic typefaces. Calligraphy is written stroke by stroke, yet once scanned into the computer it becomes a pile of curve points that must be simplified and adjusted curve by curve, and the strokes cannot be separated automatically for easier editing. Here is a demonstration from Justfont:
In recent years, some Europeans have proposed a skeleton-based approach to overhaul traditional type-design systems, which is very close to my own thinking:
They are developing the skeleton-based tool Letterink. As mentioned in my previous note 《電腦改變漢字未來》, the writing instrument (the hard-nib pen in the West, the brush in the East) determines the art form. Letterink currently suits Latin type very well, and their pipeline ultimately still outputs outline fonts, so it fits into all existing font systems. They are maturing steadily; for instance, they have worked out how to handle the heads and tails of strokes.

Our Expresii engine, by contrast, is stroke-based: characters are written out stroke by stroke, in order:
Chinese glyphs produced this way can of course be re-rendered with different brush widths, water amounts, paper textures, and so on, but the problem is that they don't embed well in today's outline-font systems, because effects such as flying-white (飛白) and ink-tone variation are hard to represent there. I discussed this with veteran typeface designer Sammy Or (柯熾堅), and he very much liked our ink-tone variation!

Tailoring a system to our own culture

So I believe that to truly capture the character of Chinese calligraphic type, we need to build a new, stroke-based font system. This is a daunting undertaking: when everyone depends on the traditional outline-font system, nobody will switch to your new one. It is as hard as a country on 110 V electricity converting the whole nation to 220 V. But it is not impossible: if a certain powerful country is determined to develop its own computer operating system, it may well develop its own font system to go with it.

Ideally, everyone will one day write with digital pens, and software will give the writing the look of brush calligraphy, without the hassle of preparing ink and washing brushes. Chinese calligraphy could then flourish anew in the digital realm, and we would no longer need projects like certain calligraphy-restoration efforts that scan works and slowly convert them into curves to make fonts.

Most of my father's generation wrote beautifully; now fewer and fewer people write well. I hope our calligraphic culture will never need 'conservation' (things need conserving only when too few people practise them), but will instead find a form suited to the present so that everyone keeps on writing. Then we can all write digital calligraphy together in the new era!

Technical feasibility

Outline-based fonts have one advantage: simplicity, which makes them easy to use in page-description languages such as PostScript. Early laser printers even carried chips running their own rasterizers that executed PostScript instructions directly. Today computers are powerful enough that rasterization is done in software on the host, and printers no longer need their own rasterizer.

My proposal here, drawing stroke-based glyphs with a virtual brush, relies on the GPU power of modern computers. GPUs have advanced rapidly over the last 20 years, and even today's integrated GPUs have caught up, so I believe it is feasible. Of course, the engineering still needs further optimization: Expresii currently writes each stroke out as a visible 'process', so rendering a character is relatively slow. We may need algorithms such as parallel stroke rendering to make it fast enough for on-screen display and printing. Once Expresii has painted a character, as long as the intermediate results are kept, its rendering engine can quickly re-render it at different sizes, so a cache system should also be part of an optimized new font system. Even if it never becomes fast enough for body text, it could still serve for display type; calligraphic styles suit headlines better than body text anyway.
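The caching idea above can be sketched as follows. This is a minimal illustration, not Expresii's actual engine; the names (`simulate_strokes`, `cached_strokes`, `render_glyph`) are hypothetical. The expensive stroke-by-stroke simulation runs once per character, its intermediate result is cached, and re-rendering at a new size reuses that result:

```python
from functools import lru_cache

def simulate_strokes(char: str):
    """Hypothetical stand-in for the expensive stroke-by-stroke simulation.

    Returns an 'intermediate result' for the character (here, a toy
    stroke list; in a real engine, simulated ink geometry).
    """
    return [f"{char}-stroke-{i}" for i in range(3)]

@lru_cache(maxsize=1024)
def cached_strokes(char: str):
    # The first request pays the simulation cost; later requests are instant.
    return simulate_strokes(char)

def render_glyph(char: str, size_px: int) -> str:
    # Re-rendering at a new size reuses the cached intermediate result,
    # so only the cheap final rasterization step depends on size.
    strokes = cached_strokes(char)
    return f"<glyph '{char}' at {size_px}px, {len(strokes)} strokes>"
```

Requesting the same character at 64 px and then 256 px simulates it only once; the second render is a cache hit.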

來一起參與一場書法革命 Digital Chinese Calligraphy Revolution

4/4/2021


 

Summary

Hard-nib pens are generally less expressive than the brush, because their strokes vary far less. With software, however, the digital pen's expressiveness now rivals, or even surpasses, the brush.

I very much hope schools will consider teaching computer calligraphy. We educate to prepare children for the challenges of the next 30 years. As electronic devices grow ever more popular, digital calligraphy is bound to develop. People of the future will spend more time with digital pens, just as people today use hard-nib pens more than brushes. Come, let's join this calligraphy revolution together!

Brush → hard-nib pen → digital pen

Since the Western hard-nib pen reached the East, we have abandoned the brush for everyday writing, and the pen's popularity gave rise to 'hard-pen calligraphy'. Strictly speaking, today's digital pen is a kind of hard-nib pen. As electronic devices become ever more widespread, computer calligraphy will inevitably develop. I have been cultivating this field for years, developing the ink-and-calligraphy software Expresii, which can already produce work rivalling brush calligraphy and reproduces the brush's expressive variety better than any other software.

The artistry of computer calligraphy (called 數字書法 in mainland China and 數位書法 in Taiwan) sits between the brush and the hard-nib pen. It has the pen's convenience, but only about one millimetre of pressure travel, versus roughly an inch for a real brush.

Below is a writing demonstration on a Microsoft Surface Book 2:
A few seniors, such as Mr. Chua Lam (蔡瀾), are more open-minded. He had tried writing calligraphy on an iPad, so I visited him to seek advice. I let Mr. Chua try Expresii, and he found it better than those iPad apps:

Software and hardware must work together

But Mr. Chua felt the pen itself was still imperfect. I said there was nothing I could do about hardware; I only make software. He urged me to work with digital-pen makers to perfect the digital calligraphy experience. In fact, I have been giving feedback to Wacom for years, but the company is so large that I don't know whether it ever reaches the decision makers. I have also tried contacting mainland brands such as Huion and Ugee, but they are even harder to reach. Presumably they are all busy fighting for the profitable pen-tablet market and have no time for computer calligraphy.

Changing how you move the pen

Because a traditional hard-nib tip has only a short pressure travel with which to vary line weight, the key is to 'scale down' one's pen movements to produce brush-like thick-and-thin variation. The digital pen is similar: although software can amplify the weight variation, the hand must still shrink its motions to achieve the rhythmic 'lift and press' (提按), which matters especially in running and cursive scripts. This is my takeaway from years of writing digital calligraphy.
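The point that 'software can amplify the weight variation' can be illustrated with a simple response curve. This is a generic sketch under our own assumptions, not Expresii's actual mapping; the function name and parameters are hypothetical:

```python
def pressure_to_width(pressure: float, min_w=0.5, max_w=40.0, gamma=0.6):
    """Map normalised pen pressure (0..1) to a stroke width in pixels.

    A gamma below 1 expands the low-pressure range, so small changes in a
    light touch produce large width changes -- letting a ~1 mm pressure
    travel stand in for the much longer travel of a real brush.
    """
    p = min(max(pressure, 0.0), 1.0)  # clamp to the valid range
    return min_w + (max_w - min_w) * (p ** gamma)
```

With gamma = 0.6, half pressure already yields well over half of the maximum width, which is what makes the 'scaled-down' lift-and-press motion effective.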

The advantages of going digital

Lossless reproduction

Can ink-wash works be reproduced? Yes, via the woodblock water-printing craft of Rongbaozhai (榮寶齋). Though hailed as a 'lost art', it requires manually separating the original into 'layers' (so not every painting can be reproduced), carving each layer onto woodblocks, then hand-inking and printing block by block, an extremely involved process. A digital work, by contrast, naturally has layers, and can of course be copied or transmitted over the network losslessly.

Seamless editing

Ink marks on xuan paper are very hard to alter, which is why ink-wash painting is called the most 'honest' of painting media. Yet even masters like Xu Beihong (徐悲鴻) sometimes wished to make changes: when Rongbaozhai reproduced one of his galloping-horse paintings by woodblock water printing, Xu felt a horse's legs were too long and asked whether the copy could be corrected. In the digital world, works can be edited seamlessly. You may choose not to use that ability, but it is hard to deny its usefulness.

Replaying the strokes

The painting process can be recorded and replayed. This is a great help in teaching: since the work is already digital, there is no need to film the teacher with a camera to understand the process. A recording stores only the pen-and-ink data, so the files are tiny and easy to send over the network.

Conclusion

The arrival of the computer does not mean abandoning tradition. To meet the computer age, developing digital calligraphy is precisely the work needed to extend tradition into the digital realm.

When promoting digital calligraphy, I usually look to the younger generation: the older generation's hands are practised with the traditional brush, so they tend to demand that the digital pen behave exactly like it, which hardware limitations still make difficult*. Hardware makers currently focus on imitating the Western hard-nib pen; I hope they will engage and collaborate with us more, so that together we can perfect the digital calligraphy experience.

If education authorities agree with this humble view, I hope they will also allocate resources to help popularize computer calligraphy!

* Wacom, the world's largest digital-pen maker, is a Japanese company, and the Japanese have surely thought of making a digital pen that mimics the brush; yet after 30 years the company has still not produced one that does so effectively (rather than merely treating the bristle tuft as a single point, as the Sensu Brush does), so clearly it is not easy. For painting on a computer with a real wet brush there is LightStrokes, but it still has problems: 1. with no cursor, you have to guess where the stroke will land; 2. a wet brush gliding on glass feels nothing like writing on paper, which makes calligraphy hard to control. These problems remain to be solved.
© Expresii.com 2023. All Rights Reserved.