- Migration path to modern C++ for legacy OWL applications written in Borland C++.
- Support for modern C++ and compilers from Embarcadero and Microsoft.
- Corrections and improvements to the original OWL API for more robust code.
- New features, such as Dialog Data Transfer and Safe Transfer Buffers.
- 32-bit and 64-bit targets for Windows XP/Vista/7/8/10/11.
- Additional extension libraries (OCF, OWLExt and CoolPrj).
Things You Should Know to Easily Learn Web Scraping
Make learning Web Scraping less difficult.
Web Scraping is the process of extracting data from a website. Learning Web Scraping could be as easy as following a tutorial on how libraries like Beautiful Soup or Selenium work; however, you should know some concepts to better understand what these scraping tools do and to come up with effective ways to tackle a task.
In this article, I made a list of 5 things I wish I had known when learning Web Scraping. They are either concepts you should understand before learning Web Scraping or advice to make your code more robust when scraping.
1. HTML Basics for Web Scraping
Before you start learning any Web Scraping library, it's a good idea to get familiar with the elements of an HTML document and how web pages are structured. For example, let's take a look at the following HTML code I wrote.
<article type="basic">
  <author>John Doe</author>
  <title>
    <topic area="programming"> Learn Python </topic>
  </title>
  <date>2021-04-09</date>
</article>
This code represents a web page with an article titled "Learn Python", published on 2021-04-09 by John Doe.
However, if you only read the code, you will see a document structured using "nodes" like the one above. There are element nodes, attribute nodes, and text nodes, and together they form a tree that shows the hierarchical relationships between nodes (parent, child, and sibling nodes). This is just a simple example; in reality, a website might have many more nodes that are harder to keep track of.
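To make these node types concrete, here is a minimal sketch that parses the sample document above with Python's built-in xml.etree module (used purely for illustration, not as a scraping tool) and prints every element, attribute, and text node with indentation that mirrors the tree:

# Walk the sample document and print its element, attribute, and text nodes.
import xml.etree.ElementTree as ET

doc = """
<article type="basic">
  <author>John Doe</author>
  <title>
    <topic area="programming"> Learn Python </topic>
  </title>
  <date>2021-04-09</date>
</article>
"""

def walk(element, depth=0):
    indent = "  " * depth
    print(f"{indent}element node: <{element.tag}>")
    for name, value in element.attrib.items():      # attribute nodes
        print(f"{indent}  attribute node: {name}={value!r}")
    if element.text and element.text.strip():       # text node
        print(f"{indent}  text node: {element.text.strip()!r}")
    for child in element:                           # child element nodes
        walk(child, depth + 1)

walk(ET.fromstring(doc))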
Let's identify the relationships between nodes.
- The "root node" is the top node. In this example, <article> is the root.
- Every node has exactly one "parent", except the root. The <author> node's parent is the <article> node.
- An element node can have zero, one, or several "children," but attribute and text nodes have no children. <topic> has two child nodes, but no child elements.
- "Siblings" are nodes with the same parent.
- A node's children and its children's children are called its "descendants". Similarly, a node's parent and its parent's parent are called its "ancestors".
Parent and child nodes become vital when you can't find a particular element directly but only its parent or child. In Selenium, you can use .find_element_by_xpath('./..') on an element to find its parent node.
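As a quick illustration, here is a minimal sketch (the URL is a placeholder, and it uses the find_element(By.XPATH, ...) form available in current Selenium versions) that locates the <topic> element from the sample document and then steps up to its parent with the relative XPath './..':

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# Locate <topic>, then navigate to its parent (<title> in the sample document).
topic = driver.find_element(By.XPATH, "//topic[@area='programming']")
parent = topic.find_element(By.XPATH, "./..")
print(parent.tag_name)

driver.quit()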
2. There are many Web Scraping tools. Choose the best for your project.
There are many web scraping tools available, so before spending time learning how to use any of them, analyze which one suits your project best. In Python, some of the popular options are Beautiful Soup, Selenium, and Scrapy. Here are some of their advantages and disadvantages.
- Beautiful Soup: Without a doubt, this is the easiest tool to learn among the 3 options; however, it has some dependencies, such as the need for the requests library to make requests to the website and the use of external parsers to extract data (XML and HTML). These dependencies make it complicated to transfer code between projects.
- Selenium: Unlike Beautiful Soup, Selenium can help you extract data from websites that rely on JavaScript to create dynamic content on the page. However, one of the disadvantages of Selenium is speed, because all the scripts present on the web page will be executed.
- Scrapy: If speed is a priority in your project, you should learn Scrapy. Scrapy is asynchronous, so Scrapy spiders don't have to make requests one at a time; they can make requests in parallel (see the sketch below). This makes Scrapy more memory- and CPU-efficient than the other tools discussed here. Unfortunately, learning Scrapy might not be as easy as learning Beautiful Soup.
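Here is a minimal, hedged sketch of what such a spider can look like (the domain, start URLs, and field name are hypothetical); Scrapy schedules the requests concurrently on its own, so the spider only describes what to fetch and what to extract:

# A minimal Scrapy spider sketch; the URLs and field names are placeholders.
# Scrapy fetches the start URLs concurrently instead of one at a time.
import scrapy

class TopicSpider(scrapy.Spider):
    name = "topics"
    start_urls = [
        "https://example.com/page1",
        "https://example.com/page2",
    ]

    def parse(self, response):
        # Extract the text of every matching node on the page.
        for topic in response.xpath("//topic[@area='programming']"):
            yield {"title": topic.xpath("text()").get()}

You would typically run a spider like this with the scrapy crawl command inside a Scrapy project, or with scrapy runspider for a single file.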
You can find more details about each tool in the article below.
Web Scraping with Beautiful Soup, Selenium or Scrapy? Find the best scraping tool for your Python project. (towardsdatascience.com)
3. Effective ways to locate an element
There are different ways to find an element on a page. You can find elements by id, name, XPath, tag name, class name, etc. Although you can use any of them, it's recommended to try them in this order:
- ID
- Class name
- Tag name
- XPath (selector only available in Selenium and Scrapy)
This order is recommended because an id is unique, so we can be certain that we're going to pick the element we want, and the lookup will be faster. However, if we find elements by class name or tag name, we will get the first element with that class or tag, which might not be the one we're looking for.
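For instance, here is a minimal sketch (the element id, class name, and URL are hypothetical) that follows the recommended order; newer Selenium versions use find_element(By..., ...) rather than the find_element_by_* helpers shown elsewhere in this article:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

by_id = driver.find_element(By.ID, "main-title")           # unique: safest and fastest
by_class = driver.find_element(By.CLASS_NAME, "headline")  # returns only the first match
by_tag = driver.find_element(By.TAG_NAME, "h1")            # returns only the first <h1>

driver.quit()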
Of course, you won't always have a suitable id or name attribute for the element you wish to locate, and this is when XPath comes in handy. You can use XPath to locate an element either in absolute terms (absolute path) or relative to an element that does have an id or name attribute (relative path).
There are a couple of important things to keep in mind about these two kinds of paths.
Absolute XPaths contain the location of all elements from the HTML root; therefore, they're likely to break with the slightest adjustment to the page. For this reason, it's not advisable to use absolute XPaths. The following is an absolute XPath that locates the "topic" element in the HTML code we've seen above.
topic = driver.find_element_by_xpath("/article/title/topic")
With relative paths, however, you can find elements based on their relationship with nearby elements (parent, children). This is less likely to change, which makes your code more robust.
topic = driver.find_element_by_xpath("//topic[@area='programming']")
With the code above, we used the area attribute and the topic tag as references instead of locating the element from the HTML root as we did before.
The following advice is especially relevant to Selenium.
4. Implicit Waits vs Explicit Waits
Understanding what implicit and explicit waits do in Selenium is key when working with applications developed using Ajax and JavaScript, because when the browser loads these pages, the elements we want to interact with may load at different intervals.
As a result, an element might not yet be present or visible in the DOM (Document Object Model, aka the tree structure we've seen before) when scraping, so we'll get an "ElementNotVisibleException." Fortunately, we can deal with this issue by using "waits." Selenium provides two types of waits: implicit and explicit.
An implicit wait tells the web driver to keep trying to locate an element for up to a certain amount of time before giving up; in Selenium you set it with driver.implicitly_wait(), passing the number of seconds to wait. In Python you can also pause the whole script by importing the time library and calling time.sleep() with the seconds to wait inside the parentheses, although that is a fixed pause rather than a true implicit wait.
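Here is a minimal sketch (the URL is a placeholder) showing both approaches side by side:

import time
from selenium import webdriver

driver = webdriver.Chrome()
driver.implicitly_wait(10)         # poll the DOM for up to 10 seconds on every element lookup
driver.get("https://example.com")  # placeholder URL

time.sleep(5)                      # a plain sleep always pauses the whole script for 5 seconds

driver.quit()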
On the other hand, an explicit wait makes the web driver wait for a specific condition (Expected Conditions) to occur before proceeding further with execution. Let's see an example.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("any_website")
title = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, "//topic[@area='programming']"))
)
driver.quit()
First, we imported expected_conditions and WebDriverWait. Then we made the web driver wait up to 10 seconds until an element with the XPath "//topic[@area='programming']" shows up in the DOM.
Some commonly used conditions are presence_of_element_located, visibility_of_element_located, presence_of_all_elements_located, and element_located_to_be_selected.
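As a small variation on the example above, the following sketch (the URL and element id are placeholders) waits until an element is actually visible rather than merely present in the DOM:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# Wait up to 10 seconds for a hypothetical element to be displayed, not just present.
banner = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "banner"))
)
driver.quit()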
5. Make web scraping easier with WebDriver's Options
I can't tell you how many times I had to come up with my own ways to solve issues when scraping, such as blocking ads or browser notifications. However, you can easily deal with these and many other browser issues by using Selenium's Options. Let's look at an example.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# create instance
options = Options()

# add arguments
options.headless = True  # headless mode
options.add_argument('--load-extension=path_of_extension')  # load an (unpacked) extension

driver = webdriver.Chrome(options=options)
...
In this simple example, we imported Options and created an instance. Then we customized the default options Selenium uses to make scraping easier, based on the problem you want to solve. For example, if the page contains ads that interfere with the data you wish to scrape, you could disable them by loading an ad-blocker extension and passing its location on your computer to .add_argument() (for Chrome, via the --load-extension flag used above).
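Browser notification pop-ups, mentioned earlier, can be handled the same way. Here is a minimal, self-contained sketch (the flags shown are standard Chrome switches; whether you need them depends on the site):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--disable-notifications')  # block notification pop-ups
options.add_argument('--mute-audio')             # silence any audio or video on the page

driver = webdriver.Chrome(options=options)
driver.quit()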
You can find other things you can do with WebDriver Options in the Selenium documentation.
That's it! With these concepts, you're ready to easily learn web scraping. Once you extract the data, you will need to clean it. I already made a guide on how to clean data, which you can find below.
A Straightforward Guide to Cleaning and Preparing Data in Python: How to identify and deal with dirty data. (towardsdatascience.com)
Join my email list with 3k+ people to get the Web Scraping Cheat Sheet I use in all my Python tutorials (free PDF).
- 乖乖背单词 is a cross-platform UWP app that supports iPhone, iPad, Xbox, and Windows 10 PC.
- Implements spaced repetition learning.
- Neural-network text-to-speech reads example sentences and vocabulary aloud. Speak human, not robot.
- Free account registration to sync learning progress.
- Original feature: download and extract video subtitles online, so you can learn new words before watching American TV shows.
- Import word lists from multiple sources: extract words from web pages or from plain text.
- High-quality Chinese and English highlighting.
- Shows the meanings of high-frequency word roots.
- Built-in top 20,000 high-frequency contemporary words.
- Built-in word lists for IELTS, TOEFL, GRE, the Chinese postgraduate entrance exam, Gaokao, Zhongkao, CET-4, and CET-6, plus custom imported word lists.
- ref:
- Spaced repetition: https://en.wikipedia.org/wiki/Spaced_repetition
- Windows 10 Store: https://www.microsoft.com/store/apps/9NPDV1SMQX76
- A developer's guide to running WPF apps on Linux with .NET Core and Wine
- Overview
- I have worked on several large WPF applications that took many years to create. When we started development, our users only used Windows, which made WPF a natural choice. WPF provides a modern UI and workflow that runs on all versions of Windows. Today, our customers increasingly want to use our applications on Linux, so we have been looking for a way to achieve this at an investment level that makes sense given the current size of the user base. With this shift, we are also looking to maximize the investments we have already made in our WPF applications. To move to Linux, we considered several options:
- Update the architecture of the applications to make the WPF-specific code as small as possible, and enable a per-platform UI. We could continue to use WPF on Windows and choose something else for Linux.
- Switch to a cross-platform UI stack. With libraries like Qt, we could create an app that would work cross-platform.
- Switch to an HTML-based UI stack. We could rearchitect our application to be an Electron app. Much of the non-UI code could be reused, but we would have to recreate the UI from scratch in HTML / JavaScript and update our architecture to support interop between JavaScript and our existing C# code.
- Switch to some sort of cloud-hosted application. Platforms like Amazon App Stream enable hosting of existing Windows apps and enable use from any platform.
- After some evaluation, we were not happy with any of these solutions. They were either cost prohibitive or would have resulted in a less desirable application. Given the size of the Linux customer base, we needed a solution that is initially low cost, and provided a model that could evolve to support tailoring features to each platform as the user base grows. We found the solution to these problems with Wine.
- With .NET Core 3.0's support for WPF, a WPF application can run on Linux under Wine. Wine is a compatibility layer that allows Windows applications to run on Linux and other OSes, including .NET Core Windows applications. More information about Wine is available at the WineHQ website.
- Wine is often used to enable users to run games on Linux. In order to support gaming, the Wine team invested in providing a full-featured implementation of DirectX. This is great for WPF, since it uses DirectX for rendering, and, from the rendering perspective, is a lot like a DirectX game.
- Wine is typically used to run applications out of the box. This is a high bar, since any missing API or behavioral difference between Wine and Windows can result in an unusable app. If you are willing to thoroughly test and make necessary application changes, you can be successful running your WPF apps on Linux. I’ve had great success getting several applications, including some very large WPF apps, running on Linux with minimal changes.
- Getting started
- Port to .NET Core
- In theory, a .NET Framework WPF application could be updated to run on Linux with Wine. However, .NET Framework’s license prohibits use on any platform other than Windows. .NET Core is open source and cross-platform. .NET Core is also where Microsoft is putting their .NET investment, so porting to .NET Core is a good idea even for Windows-only use. Given the issues with .NET Framework, the first step towards Linux support is to port your application to .NET Core. There are many great documents available on how to port a WPF application to .NET Core. Microsoft’s Migration page is a great place to start.
- It’s much easier to debug and fix issues on Windows than on Linux, so make sure your application is working well on Windows before you try it on Linux.
- Install Wine
- .NET Core WPF Apps work well with current versions of Wine, but you may run into issues with older versions. I have been testing my apps with Wine 4.21.
- Follow the instructions on the Wine Installation page to install a Wine package that is compatible with your Linux distribution. I've had success installing the development build available from WineHQ directly. Once Wine is installed, you need to set it up. Running winecfg is an easy way to get Wine to set up the Wine prefix (configuration) directory.
- Once winecfg has run, there should be a .wine directory in your home directory.
- Nothing needs to be changed in winecfg, so it can be closed.
- Setup .NET Core on Wine
- The easiest way to install .NET Core for testing is to copy the dotnet directory from your Windows install to the Linux computer.
- Copy the entire dotnet folder from the Program Files directory on Windows to the Program Files directory in the Wine configuration location (the Wine prefix set up earlier).
- Install / copy your application to Linux
- Applications that can run from the build output directory can be copied from Windows to anywhere on your Linux machine. I usually copy the application into my home directory for testing. Wine also supports setting registry keys or environment variables. If your setup has more complex requirements, you may have more difficulty, but Wine supports a surprising number of Windows features.
- Make sure fonts are available
- When testing out various applications, I often experienced odd crashes when an appropriate font was not available. For testing purposes, the easiest way to get necessary fonts is with Winetricks. Install and run Winetricks. From there, you can install fonts available from a variety of sources.
- Run your application under Wine
- Once your app is copied to the Linux machine, you can run it under Wine:
- Here is a screen capture of the Modern WPF example application running on Linux:
- This application runs unmodified on Linux.
- Note: I have only tested 64-bit applications.
- Calling native code
- You can customize your .NET app for Linux and call into native Linux code with P/Invokes in your .NET code. The key is to create additional Wine DLLs that then call into Linux libraries.
- The easiest way I have found to do this is to download and build the Wine source and then follow the patterns of the built-in DLLs. The Wine Developer Hints page has information on how to implement a new DLL. You can follow these instructions to create a DLL for your application.
- Let's say you have a .so (examplelibrary.so) that has a method like this:
- that you want to call into. To call into it, you need to make an equivalent DLL version (winExampleLibrary) that you can then P/Invoke to: