TechWire By Me ⚡️




👀 What to Expect?
✨ Latest tech news & breakthroughs 📰
🎯 AI, cybersecurity, and software updates 💻
Stay ahead of the curve or just drop by for some quality insights. Either way, you're in the right place. 😉
📩 DM for collabs, questions, or just to geek out



🚀 Introducing Flask-Ignite: Supercharge Your Flask Development! 🔥

Flask-Ignite is a powerful extension that streamlines your Flask applications, offering enhanced performance and simplified configurations. Whether you're a seasoned developer or just starting out, Flask-Ignite provides the tools you need to build robust web applications with ease.

Key Features:

🎯 Optimized middleware integration
🎯 Simplified routing mechanisms
🎯 Enhanced security protocols
🎯 Comprehensive documentation and support

Get started today and elevate your Flask projects to the next level! Learn more and download at: https://pypi.org/project/flask-ignite/
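
Here's a minimal quick-start sketch. Heads up: the FlaskIgnite class name and init pattern below are assumptions based on the usual Flask extension convention, not confirmed from the docs, so double-check against the PyPI page. 👇

# Hypothetical quick start: the FlaskIgnite class name and init_app
# pattern are assumed from the standard Flask extension convention,
# not confirmed from the flask-ignite docs
from flask import Flask
from flask_ignite import FlaskIgnite  # assumed import path

app = Flask(__name__)
ignite = FlaskIgnite()
ignite.init_app(app)  # the common two-step init most Flask extensions use

@app.route("/")
def index():
    return {"status": "ignited"}

if __name__ == "__main__":
    app.run(debug=True)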

To contribute: https://github.com/nahom-d54/flask-ignite
#Flask #Python #WebDevelopment #FlaskIgnite




🚀 LaptopHub is Live! 🎉

We’re excited to introduce LaptopHub – Ethiopia’s go-to platform for finding and comparing laptops from various Telegram sellers, all in one place! 🖥💻

🔎 What you’ll find on LaptopHub:
✅ A wide range of laptops from different sellers
✅ Easy price & spec comparison
✅ Real-time updates on the latest deals

Check it out now: LaptopHub.pro.et
Let us know what you think and share with your friends! 🚀🔥

Suggest features and Telegram channels to be indexed!


Setting Up CI/CD with cPanel and FTP 🎉

🎯 Motivation

So, I was living the dream with Vercel and Heroku—no workflows, no manual uploads, just push and BOOM! Deployment done. 🎩✨ But now, working on a real project, things got... less fun.

Here in Ethiopia, shared cPanel hosting is the go-to, and let me tell you—zipping, uploading, unzipping, deleting old files—it's like a gym workout for your patience. 😩💪

But wait! There’s FTP, and it can handle half the work for us. So the big question is: HOW?! 🤔

🛠 Step 1: Create a Workflow File

First, inside your project, create a .github/workflows folder. This is where your GitHub Actions workflow will live. 🏡📂

Now, create a file with a .yml extension. Make it descriptive—mine is deploy.yml. ✅
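
Your project layout should end up looking like this:

your-project/
└── .github/
    └── workflows/
        └── deploy.yml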

📜 Step 2: Write the Workflow Config

Here’s a sample GitHub Actions workflow to deploy a React app with FTP:

name: 🚀 Build and Deploy React App

on:
  push:
    branches:
      - main # Deploy on push to the main branch

jobs:
  build:
    name: 🏗 Build React App
    runs-on: ubuntu-latest

    steps:
      - name: 📥 Checkout Code
        uses: actions/checkout@v3

      - name: 🛠 Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: 18 # Adjust Node.js version as needed

      - name: 📦 Install Dependencies
        run: npm install

      - name: 🔨 Build React App
        run: npm run build

      - name: 📤 Deploy to FTP
        uses: SamKirkland/FTP-Deploy-Action@4.1.0
        with:
          server: ${{ secrets.FTP_SERVER }}
          username: ${{ secrets.FTP_USER }}
          password: ${{ secrets.FTP_PASS }}
          local-dir: dist/
          # Exclude source maps, node_modules, and git files
          exclude: |
            **/*.map
            **/node_modules/**
            **/.git/**

📝 Notes: This config is for React + Vite, but with small tweaks it can work for other projects too! 🚀

🔑 Step 3: Setting Up GitHub Secrets

Go to your GitHub repo.

Click Settings ⚙️ (top-right corner).

Navigate to Secrets and Variables ➝ Actions.

Click New repository secret.

Add the following secrets:

FTP_SERVER ➝ Your FTP server address 🖥

FTP_USER ➝ Your FTP username 👤

FTP_PASS ➝ Your FTP password 🔑

🎉 Step 4: Push and Deploy!

Once everything is set up, push your code to GitHub, and BAM! 💥 Your site should be deployed automatically. No more manual uploads! 🥳

🚀 Enjoy hands-free deployments, even with cPanel! 🚀

If I missed anything, comment down below with questions or improvement suggestions!


🍾 Happy Coding


Well, ChatGPT added a reasoning feature, and it is shite 💩


🛒 Web Scraping Adventures

Let's chat about web scraping, especially for e-commerce websites! 🚀

After scraping 70+ international e-commerce sites, here's what I've learned:

There are only a few main types of platforms most e-commerce sites use:

💻 Salesforce Demandware
🛍️ Adobe's Magento
📦 Shopify
🔍 Algolia's Search API
...and a few others I might've forgotten! 😅

So, how do you get started with web scraping? 🤔

Here’s a simple guide:

1️⃣ Learn to use browser dev tools 🛠️
It's your best friend for understanding how websites work behind the scenes.

2️⃣ APIs are your golden ticket 🎟️
Most websites now use client-side-rendered JavaScript libraries like React. These need backend APIs, which makes scraping easier since you can interact with the API directly.
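
For instance, here's a minimal sketch of calling a JSON API directly with requests (the URL, parameters, and response fields are made up for illustration):

import requests

# Hypothetical product-search endpoint spotted in the browser's Network tab
url = "https://example.com/api/products"
params = {"q": "laptop", "page": 1}

resp = requests.get(url, params=params, headers={"Accept": "application/json"}, timeout=10)
resp.raise_for_status()

for product in resp.json().get("products", []):
    print(product.get("name"), product.get("price"))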

3️⃣ Look for SDK documentation 📖
If the site is an e-commerce platform, chances are it uses a commercially available SDK. You can often find its documentation online, making your code cleaner and less error-prone.

Here are some additional tips to make your scraping toolkit more comprehensive:

Advanced Web Scraping Tips 🧠

1️⃣ Use Proxies & Rotating IPs 🕶️
Many websites detect and block scraping attempts if too many requests come from the same IP. Use tools like scrapy-rotating-proxies to spread your requests across multiple IPs.
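
If you're not on Scrapy, here's the same idea as a bare-bones sketch with requests (the proxy addresses are placeholders):

import itertools
import requests

# Placeholder proxies, swap in real ones from your provider
proxies = itertools.cycle([
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
])

for url in ["https://example.com/page/1", "https://example.com/page/2"]:
    proxy = next(proxies)
    resp = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)
    print(url, resp.status_code)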

2️⃣ Headers & User-Agent Spoofing 📜
Mimic a real browser by adding proper headers (e.g., User-Agent, Accept, Referer). This reduces the chance of being flagged as a bot.
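
A quick sketch of what that looks like with requests:

import requests

# Headers copied from a real browser session make you look less bot-like
headers = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml",
    "Referer": "https://example.com/",
}

resp = requests.get("https://example.com/products", headers=headers, timeout=10)
print(resp.status_code)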

3️⃣ Learn Regex or Use AI for Precise Scraping 🔍
Sometimes you need to extract specific data from messy text. Regular Expressions (Regex) are invaluable for this!
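
A tiny example, using a made-up chunk of scraped text with prices in birr:

import re

# Messy scraped text with prices buried inside (made-up sample)
text = "Laptop A ... Br 45,999 ... Laptop B ... Br 52,500"

# Grab every price-looking token after "Br"
prices = re.findall(r"Br\s?([\d,]+)", text)
print(prices)  # ['45,999', '52,500']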

Debugging & Optimization for Scraping 🐛⚙️

1️⃣ Learn How to Handle Pagination 🔄

Most e-commerce sites have multiple pages of products. Look for pagination patterns like:
?page=2, ?offset=20, or AJAX requests loading the next page.
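
Here's a simple sketch of walking a paginated API until it runs dry (the URL and field names are illustrative):

import requests

# Keep requesting pages until the API returns an empty batch
page = 1
while True:
    resp = requests.get(
        "https://example.com/api/products",
        params={"page": page},
        timeout=10,
    )
    items = resp.json().get("products", [])
    if not items:
        break  # no more pages
    for item in items:
        print(item.get("name"))
    page += 1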


2️⃣ Use Headless Browsers Only When Necessary 🖥️➡️🚫

Tools like Selenium or Puppeteer can be heavy. Stick to requests or APIs unless JavaScript-rendered content forces you to use a headless browser.


3️⃣ Optimize Your Code for Speed ⚡

Use async-capable libraries like httpx (or aiohttp) to send requests concurrently, speeding up the process significantly.
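
A minimal sketch with httpx and asyncio (URLs are illustrative):

import asyncio
import httpx

async def fetch_all(urls):
    async with httpx.AsyncClient(timeout=10) as client:
        # Fire off all requests concurrently instead of one at a time
        return await asyncio.gather(*(client.get(u) for u in urls))

urls = [f"https://example.com/api/products?page={i}" for i in range(1, 6)]
for resp in asyncio.run(fetch_all(urls)):
    print(resp.url, resp.status_code)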


Logging & Error Handling 🪵🚨

Add Proper Logs: Use libraries like loguru to track errors and requests.

Retry Logic: Implement retry mechanisms for failed requests with libraries like tenacity.

Error Handling: Handle HTTP errors (e.g., 403, 404) gracefully without breaking your script.
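
Here's a small sketch tying retries and error handling together with tenacity and requests (the URL is a placeholder):

import requests
from tenacity import retry, stop_after_attempt, wait_exponential

# Retry up to 3 times with exponential backoff; reraise the original error
@retry(stop=stop_after_attempt(3), wait=wait_exponential(min=1, max=10), reraise=True)
def fetch(url):
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # 403/404/5xx raise HTTPError and trigger a retry
    return resp

try:
    page = fetch("https://example.com/products")
    print(len(page.text))
except requests.HTTPError as e:
    # Swap print for loguru's logger.error in a real script
    print(f"Giving up on this URL: {e}")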

Handy Tools 🛠️

Here are some tools that can make your scraping journey smoother:
🐍 BeautifulSoup (Python) – Great for parsing HTML.
🚀 Selenium – Perfect for scraping JavaScript-heavy websites.
📦 Playwright/Puppeteer/Selenium – For headless browser automation.
📡 Postman – Helps you explore and test APIs before scraping.
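
To tie a couple of these together, here's a small BeautifulSoup sketch (the URL and CSS selectors are illustrative, find the real ones with dev tools first):

import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/products", timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

# Selectors are illustrative, inspect the real page to find yours
for card in soup.select(".product-card"):
    name = card.select_one(".product-name")
    price = card.select_one(".price")
    if name and price:
        print(name.get_text(strip=True), price.get_text(strip=True))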

Here's some example code:
https://github.com/nahom-d54/BestBuyscraper


I was scrolling through X and there was a lot of discussion about DeepSeek and how it compares to OpenAI's o1 model.

I had never tried the o1 model before, for financial and necessity reasons, so I downloaded and tried DeepSeek.
And damn, it's amazing, and scary too, and it made me realize how much things are changing. Anyway, check out this masterpiece.


Added a similarity-based recommendation system

Next up: search and filter


Hello, subscribers of my Telegram channel! I'm excited to share my new personal project:
Tops

It scrapes laptops from across different Telegram channels and puts them all into one big website.

It's still in progress and backend-only for now.

You can check it out here

Any suggestions and feature recommendations are extremely welcome!
