
Several Effective Strategies for Comparing and Merging Files in Python

ztj100 2025-02-18 14:24

Comparing and merging multiple files is a common task in everyday programming and data analysis. With its strong file-handling capabilities and broad library support, Python is an ideal choice for this kind of work.

Below, we explore several effective strategies for comparing and merging files, each with a detailed code example and explanation.

1. Basic File Reading and Writing

First, understanding how to read and write files is the foundation.

# Open and read content from the input file
with open('input_file.txt', 'r') as input_file:  
    data = input_file.readlines()  # Read all lines from the input file

# Open the output file and write the content into it
with open('output_file.txt', 'w') as output_file:  
    for line in data:  
        output_file.write(line)  # Write each line to the output file
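When the goal is simply to copy a file unchanged, there is no need to read all lines into a list first. A minimal sketch using the standard library's shutil.copyfile (the temporary directory and file names here are just for the demonstration):

```python
import os
import shutil
import tempfile

# For a straight byte-for-byte copy, shutil.copyfile avoids holding
# the whole file in memory as a list of lines
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, 'input_file.txt')
    dst = os.path.join(tmp, 'output_file.txt')

    # Create a small throwaway source file for the demo
    with open(src, 'w') as f:
        f.write('hello\n')

    # Copy the file contents in one call
    shutil.copyfile(src, dst)

    with open(dst) as f:
        copied = f.read()

print(copied)
```

This also copies binary files correctly, which the line-by-line approach above does not guarantee.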

2. Comparing File Contents

Use the difflib library to compare the differences between two files.

# Import the difflib module for file comparison
import difflib

# Open both input files for reading
with open('input_file1.txt', 'r') as input_file1, open('input_file2.txt', 'r') as input_file2:

    # Compare the content of the two files using unified_diff
    diff = difflib.unified_diff(input_file1.readlines(), input_file2.readlines(),
                                fromfile='input_file1.txt', tofile='input_file2.txt')

    # Each diff line already ends with '\n', so join without a separator
    # (joining with '\n' would double-space the output)
    print(''.join(diff), end='')
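Beyond a line-by-line diff, difflib can also give a quick overall similarity score. A small sketch with in-memory sample lines (stand-ins for file contents):

```python
import difflib

# Two small in-memory samples standing in for the contents of two files
text1 = ["apple\n", "banana\n", "cherry\n"]
text2 = ["apple\n", "blueberry\n", "cherry\n"]

# SequenceMatcher.ratio() returns a similarity score between 0.0 and 1.0,
# computed as 2 * matches / total number of lines in both sequences
ratio = difflib.SequenceMatcher(None, text1, text2).ratio()
print(f"Similarity: {ratio:.2f}")
```

Here 2 of the 3 lines match in each file, so the ratio is 2 * 2 / 6 ≈ 0.67. This is useful for triaging many file pairs before running a full diff.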

3. Merging CSV Files

For CSV files, the pandas library can be used for merge operations.

# Import pandas library for data manipulation
import pandas as pd  

# Read the first CSV file into a DataFrame
df1 = pd.read_csv('data_file1.csv')  

# Read the second CSV file into a DataFrame
df2 = pd.read_csv('data_file2.csv')  

# Merge the two DataFrames by concatenating them, assuming matching column names
merged_df = pd.concat([df1, df2], ignore_index=True)  

# Save the merged DataFrame to a new CSV file
merged_df.to_csv('output_merged.csv', index=False)
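When concatenating several sources, it can be useful to record which file each row came from. A sketch using pd.concat's keys parameter, with small hypothetical DataFrames standing in for the CSV files:

```python
import pandas as pd

# In-memory stand-ins for two CSV files with matching columns
df1 = pd.DataFrame({'name': ['a', 'b'], 'value': [1, 2]})
df2 = pd.DataFrame({'name': ['c'], 'value': [3]})

# keys= tags each row with a label for its source file; the result gets
# a MultiIndex whose outer level records where every row came from
merged = pd.concat([df1, df2], keys=['file1', 'file2'])

# Select all rows that originated from the second file
print(merged.loc['file2', 'name'].tolist())
```

This trades the flat index of ignore_index=True for provenance information, which helps when debugging a merge.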

4. Column-Based CSV Merging

Merge on specific columns, for example joining files on a common key.

# Import pandas library for data manipulation
import pandas as pd  

# Read the first CSV file into a DataFrame
df1 = pd.read_csv('data_file1.csv')  

# Read the second CSV file into a DataFrame
df2 = pd.read_csv('data_file2.csv')

# Merge the two DataFrames based on a common column named 'common_key'
# 'how="outer"' ensures that all rows from both DataFrames are included, 
# with missing values filled as NaN where data does not match
merged_df = pd.merge(df1, df2, on='common_key', how='outer')  

# Save the merged DataFrame to a new CSV file
merged_df.to_csv('output_merged_by_key.csv', index=False)  
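The how parameter controls which keys survive the join, and indicator=True makes the result auditable. A sketch with small hypothetical DataFrames sharing a 'common_key' column:

```python
import pandas as pd

# Small in-memory frames (hypothetical data) with a shared key column
df1 = pd.DataFrame({'common_key': [1, 2, 3], 'a': ['x', 'y', 'z']})
df2 = pd.DataFrame({'common_key': [2, 3, 4], 'b': [10, 20, 30]})

# 'inner' keeps only keys present in both frames
inner = pd.merge(df1, df2, on='common_key', how='inner')

# 'outer' keeps every key; indicator=True adds a '_merge' column
# showing whether each row came from the left, right, or both frames
outer = pd.merge(df1, df2, on='common_key', how='outer', indicator=True)

print(len(inner))  # keys 2 and 3 appear in both frames
print(len(outer))  # keys 1 through 4 each produce a row
```

Checking the '_merge' column after an outer join is a quick way to find keys that failed to match.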

5. Row-Based Merging

When merging files with a similar row structure, iterate over them and append their rows directly.

# Initialize an empty list to store the content from all input files
data = []  

# List of input text files to be read and merged
for filename in ['input_file1.txt', 'input_file2.txt']:  
    # Open each file in read mode
    with open(filename, 'r') as file:  
        # Read all lines from the current file and add them to the data list
        data.extend(file.readlines())  

# Open the output file in write mode
with open('output_merged_file.txt', 'w') as merged_file:  
    # Write each line from the data list into the output file
    for line in data:  
        merged_file.write(line)

6. Merging with Deduplication

Use a set to remove duplicate lines before merging.

# Initialize a set to store unique lines from all input files
unique_lines = set()  

# List of input text files to be read and merged
for filename in ['input_file1.txt', 'input_file2.txt']:  
    # Open each file in read mode
    with open(filename, 'r') as file:  
        # Add all lines from the current file to the set (duplicates are automatically removed)
        unique_lines.update(file.readlines())  

# Open the output file in write mode
with open('output_merged_unique.txt', 'w') as merged_file:  
    # Sort the unique lines to ensure consistent output order
    for line in sorted(unique_lines):  
        # Write each unique line into the output file
        merged_file.write(line)
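A set discards the original line order, which is why the code above has to sort. If the first-seen order of lines should be preserved instead, dict.fromkeys offers an order-preserving alternative (dicts keep insertion order since Python 3.7):

```python
# In-memory sample lines with duplicates, standing in for merged file content
lines = ["b\n", "a\n", "b\n", "c\n", "a\n"]

# dict.fromkeys removes duplicates while keeping each line's
# first occurrence in its original position
deduped = list(dict.fromkeys(lines))
print(deduped)  # ['b\n', 'a\n', 'c\n']
```

This is often preferable for log-like files where line order carries meaning.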

7. Binary Comparison of Files

Use the filecmp module to compare the binary contents of files.

# Import the filecmp module for file comparison
import filecmp

# shallow=False forces a byte-by-byte comparison of the contents;
# the default (shallow=True) only compares os.stat() signatures
# such as file size and modification time
if filecmp.cmp('input_file1.txt', 'input_file2.txt', shallow=False):
    print("Files are identical.")  # Output message if files are identical
else:
    print("Files differ.")  # Output message if files differ
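When the same files must be compared repeatedly, or compared across machines, hashing each file once and comparing digests can be cheaper than pairwise byte comparison. A sketch using hashlib with throwaway demo files (the paths are hypothetical):

```python
import hashlib
import os
import tempfile

def file_digest(path, chunk_size=65536):
    """Hash a file in fixed-size chunks so large files never sit fully in memory."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# Demo: two throwaway files containing identical bytes
with tempfile.TemporaryDirectory() as tmp:
    p1 = os.path.join(tmp, 'a.txt')
    p2 = os.path.join(tmp, 'b.txt')
    for p in (p1, p2):
        with open(p, 'wb') as f:
            f.write(b'same content\n')

    # Equal digests imply (with overwhelming probability) equal contents
    same = file_digest(p1) == file_digest(p2)

print(same)
```

Digests can be cached, so each file is read only once no matter how many comparisons it takes part in.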

8. Efficient Comparison of Large Files

For large files, read and compare them line by line to save memory.

# Open both large files for reading
with open('input_large_file1.txt', 'r') as f1, open('input_large_file2.txt', 'r') as f2:

    # Read lines from both files simultaneously and compare them
    for line1, line2 in zip(f1, f2):
        # If a difference is found between the two lines, print a message and stop the comparison
        if line1 != line2:
            print("Difference found!")
            break  # Exit the loop as the first difference has been found
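Note that zip stops at the end of the shorter file, so trailing extra lines in the longer file would go unnoticed. itertools.zip_longest pads the shorter side with None and catches this case. A sketch using in-memory streams as stand-ins for the two files:

```python
import io
from itertools import zip_longest

# In-memory stand-ins for two files; the second has one extra trailing line
f1 = io.StringIO("line1\nline2\n")
f2 = io.StringIO("line1\nline2\nline3\n")

# zip() would stop after line 2 and miss the extra line;
# zip_longest pads the shorter file with None so the difference is caught
first_diff = None
for i, (line1, line2) in enumerate(zip_longest(f1, f2), start=1):
    if line1 != line2:
        first_diff = i
        break

print(first_diff)  # 3
```

The same substitution works unchanged with real file objects opened via open().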

9. Dynamic Merging of Multiple Files

Use a loop to dynamically merge the files in a list of file paths.

# Generate a list of file paths for input files ('input_file1.txt' to 'input_file3.txt')
file_paths = ['input_file{}.txt'.format(i) for i in range(1, 4)]  

# Open the output file ('output_merged_all.txt') in write mode
with open('output_merged_all.txt', 'w') as merged:  
    # Iterate through the list of input file paths
    for path in file_paths:  
        # Open each file in read mode
        with open(path, 'r') as file:  
            # Write the content of the current file to the merged output file
            # Add a newline character to separate the content of different files
            merged.write(file.read() + '\n')
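file.read() loads each whole file into memory, which does not scale to large inputs. shutil.copyfileobj streams in chunks instead. A sketch that builds three small throwaway input files and then merges them (the temporary paths are just for the demo):

```python
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Create three small demo input files
    paths = []
    for i in range(1, 4):
        p = os.path.join(tmp, f'input_file{i}.txt')
        with open(p, 'w') as f:
            f.write(f'file {i}\n')
        paths.append(p)

    # Stream each input file into the merged output in fixed-size chunks,
    # so no file is ever loaded into memory in full
    out_path = os.path.join(tmp, 'merged.txt')
    with open(out_path, 'wb') as merged:
        for path in paths:
            with open(path, 'rb') as src:
                shutil.copyfileobj(src, merged)

    with open(out_path) as f:
        result = f.read()

print(result)
```

Opening in binary mode also makes the same loop work for non-text files.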

10. Advanced Merge Strategy: Smart Merging

For more complex merge criteria, such as merging by date or ID, sort the data before merging.

# Import pandas library for data manipulation
import pandas as pd  

# Read CSV files ('input_file1.csv' and 'input_file2.csv') into DataFrames
dfs = [pd.read_csv(f) for f in ['input_file1.csv', 'input_file2.csv']]  

# Concatenate the DataFrames and sort by the 'date_column', assuming it's the column holding the date data
sorted_df = pd.concat(dfs).sort_values(by='date_column')  

# Save the merged and sorted DataFrame to a new CSV file
sorted_df.to_csv('output_smart_merged.csv', index=False)  
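A common variant of this pattern is keeping only the latest record per ID after sorting by date. A sketch with small hypothetical DataFrames (the 'id', 'date_column', and 'v' columns are illustrative, not from the original files):

```python
import pandas as pd

# Hypothetical frames with an 'id' and a 'date_column', as in the text;
# ISO-formatted date strings sort correctly even as plain strings
dfs = [
    pd.DataFrame({'id': [1, 2],
                  'date_column': ['2025-01-01', '2025-01-03'],
                  'v': [10, 20]}),
    pd.DataFrame({'id': [1, 3],
                  'date_column': ['2025-01-02', '2025-01-04'],
                  'v': [15, 30]}),
]

# Concatenate, sort by date, then keep only the newest row for each id:
# keep='last' retains the final (latest-dated) occurrence per id
merged = pd.concat(dfs, ignore_index=True).sort_values('date_column')
latest = merged.drop_duplicates(subset='id', keep='last').sort_values('id')

print(latest[['id', 'v']].to_dict('records'))
```

Here id 1 appears in both files, and the 2025-01-02 row wins because it is dated later.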
